Trains were designed to break down after third-party repairs, hackers find
466 by Stratoscope | 164 comments on Hacker News.
Thursday, December 14, 2023
Wednesday, December 13, 2023
SMERF: Streamable Memory Efficient Radiance Fields
431 by duckworthd | 104 comments on Hacker News.
We built SMERF, a new way to explore NeRFs in real time in your web browser. Try it out yourself! Over the last few months, my collaborators and I have put together a new real-time method that makes NeRF models accessible from smartphones, laptops, and low-power desktops, and we think we’ve done a pretty stellar job! SMERF, as we like to call it, distills a large, high-quality NeRF into a real-time, streaming-ready representation that’s easily deployed to devices as small as a smartphone via the web browser. On top of that, our models look great! Compared to other real-time methods, SMERF is more accurate than ever before. On large multi-room scenes, SMERF renders are nearly indistinguishable from state-of-the-art offline models like Zip-NeRF, and a solid leap ahead of other real-time approaches. The best part: you can try it out yourself! Check out our project website for demos and more. If you have any questions or feedback, don’t hesitate to reach out by email (smerf@google.com) or Twitter (@duck).
Show HN: Open-source macOS AI copilot using vision and voice
424 by ralfelfving | 154 comments on Hacker News.
Heeey! I built a macOS copilot that has been useful to me, so I open sourced it in case others would find it useful too. It's pretty simple:
- Use a keyboard shortcut to take a screenshot of your active macOS window and start recording the microphone.
- Speak your question, then press the keyboard shortcut again to send your question + screenshot off to OpenAI Vision.
- The Vision response is presented in context, overlaid on the active window, and spoken to you as audio.
- The app keeps running in the background, only taking a screenshot/listening when activated by the keyboard shortcut.
It's built with NodeJS/Electron, and uses the OpenAI Whisper, Vision, and TTS APIs under the hood (BYO API key). There's a simple demo and a longer walkthrough in the GitHub readme https://ift.tt/rKdnRtX , and I also posted a different demo on Twitter: https://twitter.com/ralfelfving/status/1732044723630805212