
40 Must-Read AI Papers of 2021: Have You Read Them All?
Source: Internet | Published: 2021-12-28 12:19:06 | Views: 34,244


A report from Synced (機器之心)

Editor: 蛋醬

2021 is almost over. How many papers did you read this year?

Although the world is still recovering from the disruption of the COVID-19 pandemic, and people can no longer gather in person as often as before to discuss the latest questions in their fields, AI research has not slowed its rapid pace.

In the blink of an eye we have reached the end of 2021; the year slipped by as if time had been stolen. Counting them up, how many papers did you actually read?

Canadian blogger Louis Bouchard has compiled, in order of publication date, nearly 40 outstanding papers of 2021 that you should not miss. Overall, the collection leans toward computer vision.

In this roughly 15-minute video, you can browse all of these papers quickly:

Here are the details of each paper:

1. DALLE: Zero-Shot Text-to-Image Generation from OpenAI

Paper: https://arxiv.org/pdf/2102.12092.pdf

Code: https://github.com/openai/DALL-E

Video walkthrough: https://youtu.be/DJToDLBPovg

2. VOGUE: Try-On by StyleGAN Interpolation Optimization

Paper: https://vogue-try-on.github.io/static_files/resources/VOGUE-virtual-try-on.pdf

Video walkthrough: https://youtu.be/i4MnLJGZbaM

3. Taming Transformers for High-Resolution Image Synthesis

Paper: https://compvis.github.io/taming-transformers/

Code: https://github.com/CompVis/taming-transformers

Video walkthrough: https://youtu.be/JfUTd8fjtX8

4. Thinking Fast And Slow in AI

Paper: https://arxiv.org/abs/2010.06002

Video walkthrough: https://youtu.be/3nvAaVSQxs4

5. Automatic detection and quantification of floating marine macro-litter in aerial images

Paper: https://doi.org/10.1016/j.envpol.2021.116490

Code: https://github.com/amonleong/MARLIT

Video walkthrough: https://youtu.be/2dTSsdW0WYI

6. ShaRF: Shape-conditioned Radiance Fields from a Single View

Paper: https://arxiv.org/abs/2102.08860

Code: http://www.krematas.com/sharf/index.html

Video walkthrough: https://youtu.be/gHkkrNMlGNg

7. Generative Adversarial Transformers

Paper: https://arxiv.org/pdf/2103.01209.pdf

Code: https://github.com/dorarad/gansformer

Video walkthrough: https://youtu.be/HO-_t0UArd4

8. We Asked Artificial Intelligence to Create Dating Profiles. Would You Swipe Right?

Paper: https://studyonline.unsw.edu.au/blog/ai-generated-dating-profile

Code: https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce#forceEdit=true&sandboxMode=true&scrollTo=aeXshJM-Cuaf

Video walkthrough: https://youtu.be/IoRH5u13P-4

9. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

Paper: https://arxiv.org/abs/2103.14030v2

Code: https://github.com/microsoft/Swin-Transformer

Video walkthrough: https://youtu.be/QcCJJOLCeJQ

10. Image GANs Meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering

Paper: https://arxiv.org/pdf/2010.09125.pdf

Video walkthrough: https://youtu.be/dvjwRBZ3Hnw

11. Deep nets: What have they ever done for vision?

Paper: https://arxiv.org/abs/1805.04025

Video walkthrough: https://youtu.be/GhPDNzAVNDk

12. Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image

Paper: https://arxiv.org/pdf/2012.09855.pdf

Code: https://github.com/google-research/google-research/tree/master/infinite_nature

Video walkthrough: https://youtu.be/NIOt1HLV_Mo

Online demo: https://colab.research.google.com/github/google-research/google-research/blob/master/infinite_nature/infinite_nature_demo.ipynb#scrollTo=sCuRX1liUEVM

13. Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control

Paper: https://arxiv.org/abs/2103.13452

Video walkthrough: https://youtu.be/wNBrCRzlbVw

14. Total Relighting: Learning to Relight Portraits for Background Replacement

Paper: https://augmentedperception.github.io/total_relighting/total_relighting_paper.pdf

Video walkthrough: https://youtu.be/rVP2tcF_yRI

15. LASR: Learning Articulated Shape Reconstruction from a Monocular Video

Paper: https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_LASR_Learning_Articulated_Shape_Reconstruction_From_a_Monocular_Video_CVPR_2021_paper.pdf

Code: https://github.com/google/lasr

Video walkthrough: https://youtu.be/lac7wqjS-8E

16. Enhancing Photorealism Enhancement

Paper: http://vladlen.info/papers/EPE.pdf

Code: https://github.com/isl-org/PhotorealismEnhancement

Video walkthrough: https://youtu.be/3rYosbwXm1w

17. DefakeHop: A Light-Weight High-Performance Deepfake Detector

Paper: https://arxiv.org/abs/2103.06929

Video walkthrough: https://youtu.be/YMir8sRWRos

18. High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network

Paper: https://arxiv.org/pdf/2105.09188.pdf

Code: https://github.com/csjliang/LPTN

Video walkthrough: https://youtu.be/X7WzlAyUGPo

19. Barbershop: GAN-based Image Compositing using Segmentation Masks

Paper: https://arxiv.org/pdf/2106.01505.pdf

Code: https://github.com/ZPdesu/Barbershop

Video walkthrough: https://youtu.be/HtqYMvBVJD8

20. TextStyleBrush: Transfer of text aesthetics from a single example

Paper: https://arxiv.org/abs/2106.08385

Code: https://github.com/facebookresearch/IMGUR5K-Handwriting-Dataset?fbclid=IwAR0pRAxhf8Vg-5H3fA0BEaRrMeD21HfoCJ-so8V0qmWK7Ub21dvy_jqgiVo

Video walkthrough: https://youtu.be/hhAri5fl-XI

21. Animating Pictures with Eulerian Motion Fields

Paper: https://arxiv.org/abs/2011.15128

Code: https://eulerian.cs.washington.edu/

Video walkthrough: https://youtu.be/KgTa2r7d0I0

22. CVPR 2021 Best Paper Award: GIRAFFE - Controllable Image Generation

Paper: http://www.cvlibs.net/publications/Niemeyer2021CVPR.pdf

Code: https://github.com/autonomousvision/giraffe

Video walkthrough: https://youtu.be/JIJkURAkCxM

23. GitHub Copilot & Codex: Evaluating Large Language Models Trained on Code

Paper: https://arxiv.org/pdf/2107.03374.pdf

Code: https://copilot.github.com/

Video walkthrough: https://youtu.be/az3oVVkTFB8

24. Apple: Recognizing People in Photos Through Private On-Device Machine Learning

Paper: https://machinelearning.apple.com/research/recognizing-people-photos

Video walkthrough: https://youtu.be/LIV-M-gFRFA

25. Image Synthesis and Editing with Stochastic Differential Equations

Paper: https://arxiv.org/pdf/2108.01073.pdf

Code: https://github.com/ermongroup/SDEdit

Video walkthrough: https://youtu.be/xoEkSWJSm1k

Online demo: https://colab.research.google.com/drive/1KkLS53PndXKQpPlS1iK-k1nRQYmlb4aO?usp=sharing

26. Sketch Your Own GAN

Paper: https://arxiv.org/abs/2108.02774

Code: https://github.com/PeterWang512/GANSketching

Video walkthrough: https://youtu.be/vz_wEQkTLk0

27. Tesla's Autopilot Explained

At Tesla AI Day this August, Tesla's Director of AI Andrej Karpathy and colleagues showed how Tesla collects images from eight surround cameras to build its vision-based Autopilot system.

Video walkthrough: https://youtu.be/DTHqgDqkIRw

28. StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

Paper: https://arxiv.org/abs/2103.17249

Code: https://github.com/orpatashnik/StyleCLIP

Video walkthrough: https://youtu.be/RAXrwPskNso

Online demo: https://colab.research.google.com/github/orpatashnik/StyleCLIP/blob/main/notebooks/StyleCLIP_global.ipynb

29. TimeLens: Event-based Video Frame Interpolation

Paper: http://rpg.ifi.uzh.ch/docs/CVPR21_Gehrig.pdf

Code: https://github.com/uzh-rpg/rpg_timelens

Video walkthrough: https://youtu.be/HWA0yVXYRlk

30. Diverse Generation from a Single Video Made Possible

Paper: https://arxiv.org/abs/2109.08591

Code: https://nivha.github.io/vgpnn/

Video walkthrough: https://youtu.be/Uy8yKPEi1dg

31. Skillful Precipitation Nowcasting using Deep Generative Models of Radar

32. The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks

Paper: https://arxiv.org/pdf/2110.09958.pdf

Code: https://cocktail-fork.github.io/

Video walkthrough: https://youtu.be/Rpxufqt5r6I

33. ADOP: Approximate Differentiable One-Pixel Point Rendering

Paper: https://arxiv.org/pdf/2110.06635.pdf

Code: https://github.com/darglein/ADOP

Video walkthrough: https://youtu.be/Jfph7Vld_Nw

34. (Style)CLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis

CLIPDraw paper: https://arxiv.org/abs/2106.14843

Online demo: https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb

StyleCLIPDraw paper: https://arxiv.org/abs/2111.03133

Online demo: https://colab.research.google.com/github/pschaldenpand/StyleCLIPDraw/blob/master/Style_ClipDraw.ipynb

Video walkthrough: https://youtu.be/5xzcIzHm8Wo

35. SwinIR: Image Restoration Using Swin Transformer

Paper: https://arxiv.org/abs/2108.10257

Code: https://github.com/JingyunLiang/SwinIR

Video walkthrough: https://youtu.be/GFm3RfrtDoU

Online demo: https://replicate.ai/jingyunliang/swinir

36. EditGAN: High-Precision Semantic Image Editing

Paper: https://arxiv.org/abs/2111.03186

Code: https://nv-tlabs.github.io/editGAN/

Video walkthrough: https://youtu.be/bus4OGyMQec

37. CityNeRF: Building NeRF at City Scale

Paper: https://arxiv.org/pdf/2112.05504.pdf

Code: https://city-super.github.io/citynerf/

Video walkthrough: https://youtu.be/swfx0bJMIlY

38. ClipCap: CLIP Prefix for Image Captioning

Paper: https://arxiv.org/abs/2111.09734

Code: https://github.com/rmokady/CLIP_prefix_caption

Video walkthrough: https://youtu.be/VQDrmuccWDo

Online demo: https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing

Of course, the blogger cannot guarantee that the compilation is complete. As readers have pointed out, one breakthrough research effort deserves to be added to the list by hand: AlphaFold.

Last year, DeepMind, Google's AI subsidiary, announced that its deep learning system AlphaFold had cracked the fifty-year-old problem of protein folding. In July 2021, the AlphaFold paper was formally published in Nature.

