{"id":5271,"date":"2020-08-11T16:27:56","date_gmt":"2020-08-11T23:27:56","guid":{"rendered":"http:\/\/technarrativelab.org\/?p=5271"},"modified":"2024-01-29T17:37:09","modified_gmt":"2024-01-30T01:37:09","slug":"nerf-in-the-wild","status":"publish","type":"post","link":"https:\/\/nostatic.com\/lab\/2020\/08\/11\/nerf-in-the-wild\/","title":{"rendered":"NeRF in the wild"},"content":{"rendered":"\n<p>via Todd Richmond<\/p>\n\n\n\n<p>No, nothing about shooting foam darts. Rather <a aria-label=\"undefined (opens in a new tab)\" href=\"https:\/\/arxiv.org\/abs\/2008.02268\" target=\"_blank\" rel=\"noreferrer noopener\">this paper<\/a> talks about essentially crowdsourcing images of places\/spaces, and then using neural nets to construct a synthetic 3D scene. The tricks here are dealing with varying lighting and camera angles, as well as getting rid of transient occlusions (e.g. cars or people in the shot). While standard <a aria-label=\"undefined (opens in a new tab)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Photogrammetry\" target=\"_blank\" rel=\"noreferrer noopener\">photogrammetry<\/a> techniques have gotten pretty good at constructing 3D out of still images, being able to re-simulate correct lighting is no easy task.<\/p>\n\n\n\n<p>Applications? Well, for a VR developer who wants geo-specific terrain in an environment, this could be created by algorithm rather than by hand (and hand-built assets come at a high cost). More broadly, historians, city planners, architects, tourists &#8211; the sky is the limit. 
And in a Covid-19 world where travel is limited, the ability to virtually immerse in a near or faraway land becomes pretty enticing.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<span class=\"image-placeholder video\" style=\"padding-bottom:56.25000000%\"><video controls class=\"video-js-el vjs-default-skin vjs-minimal-skin\" width=\"560\" height=\"315\" data-vsetup=\"{&quot;techOrder&quot;:[&quot;youtube&quot;],&quot;sources&quot;:[{&quot;type&quot;:&quot;video\\\/youtube&quot;,&quot;src&quot;:&quot;https:\\\/\\\/www.youtube.com\\\/watch?time_continue=1&amp;v=yPKIxoN2Vf0&amp;feature=emb_logo&quot;}],&quot;youtube&quot;:{&quot;iv_load_policy&quot;:1,&quot;ytControls&quot;:3,&quot;customVars&quot;:{&quot;wmode&quot;:&quot;transparent&quot;,&quot;controls&quot;:0},&quot;enablePrivacyEnhancedMode&quot;:&quot;true&quot;}}\" preload=\"auto\" playsinline=\"playsinline\"><\/video><\/span>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>via Todd Richmond No, nothing about shooting foam darts. Rather this paper talks about essentially crowdsourcing images of places\/spaces, and then using neural nets to construct a synthetic 3D scene. The tricks here are dealing with varying lighting and camera angles, as well as getting rid of transient occlusions (e.g. 
cars or people in&hellip;<\/p>\n","protected":false},"author":1,"featured_media":5273,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2,3,15,28],"tags":[70],"class_list":["post-5271","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","category-ar-vr","category-future","category-virtual","tag-todd-richmond"],"acf":[],"_links":{"self":[{"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/posts\/5271","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/comments?post=5271"}],"version-history":[{"count":1,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/posts\/5271\/revisions"}],"predecessor-version":[{"id":6580,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/posts\/5271\/revisions\/6580"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/media\/5273"}],"wp:attachment":[{"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/media?parent=5271"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/categories?post=5271"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nostatic.com\/lab\/wp-json\/wp\/v2\/tags?post=5271"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}