  • I can understand why a project might want to do this until the law is fully implemented and tested in court, but I can tell most of the people in this thread haven’t actually figured out how to use LLMs productively. They’re not about to replace software engineers, but as a software engineer, tools like GitHub Copilot and ChatGPT are excellent at speeding up a workflow. ChatGPT, for example, is an excellent search engine that can give you a quick understanding of a topic. It’ll generate small amounts of code more quickly than I could write it by hand. Of course I’m still going to review that code to ensure it meets the same quality bar as hand-written code, but overall this is still a much faster process.

    The luddites who hate on LLMs would have complained about the first compilers too, because they could write marginally faster assembly by hand.



  • In this case I was referring to bandwidth and latency, which on-package memory helps with. It does make a difference in memory-intensive applications, though most people would never notice. Also, Apple will absolutely give you a ton of memory; you just have to pay for it. They offer 128GB on the MacBook Pro, and it’s unified, so the GPU has full access to it, which makes it surprisingly good for running LLMs locally, for example.
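
    As a rough illustration of why that matters for local LLMs (the bytes-per-parameter figures below are assumed quantization levels for illustration, not published specs):

    ```python
    # Back-of-the-envelope sketch of why 128GB of unified memory matters
    # for running LLMs locally. Bytes-per-parameter values are assumed
    # quantization levels, not vendor numbers.

    def weights_gib(params_billions: float, bytes_per_param: float) -> float:
        """Approximate size of a model's weights in GiB."""
        return params_billions * 1e9 * bytes_per_param / 2**30

    for label, params, bpp in [
        ("70B @ fp16 ", 70, 2.0),   # ~130 GiB: a tight squeeze even at 128GB
        ("70B @ 4-bit", 70, 0.5),   # ~33 GiB: fits with plenty of headroom
        ("7B @ fp16  ", 7, 2.0),    # ~13 GiB: fits on far smaller configs
    ]:
        print(f"{label}: ~{weights_gib(params, bpp):.0f} GiB of weights")
    ```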





  • While I can’t say any of this is wrong, you’re missing what is likely the single biggest component inflating the cost of US manufacturing: profit margins. Every step of the supply chain has a profit margin attached, sometimes just a few percent, but often double digits. These compound, so a 5% margin on a simple component will see an additional 15% when sold as part of an assembly, which is then marked up another 20% when sold as part of the finished good. There’s also financialization, which burdens US companies: they generally need to take loans to fund their operations and end up paying heavy interest and rent, which also drives up costs. Worker and environmental protections are more expensive too, but in practice they are relatively minor compared to a lot of other inefficiencies US industry struggles with.
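
    A quick sketch of how those margins stack multiplicatively (the percentages come from the comment above; the base cost is invented purely for illustration):

    ```python
    # Margins compound at each supply-chain stage rather than simply adding.
    base_cost = 10.00                  # made-up example figure
    margins = [0.05, 0.15, 0.20]       # component, assembly, finished good

    price = base_cost
    for m in margins:
        price *= 1 + m  # each stage marks up the previous stage's price

    print(f"${base_cost:.2f} of base cost sells for ${price:.2f}")
    print(f"total markup: {price / base_cost - 1:.1%}")
    # 1.05 * 1.15 * 1.20 ≈ 1.449, so ~44.9% total markup -- noticeably
    # more than the naive 5 + 15 + 20 = 40% sum, because margins compound.
    ```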



  • I think it’s incredibly naïve to think that because we’ve hit a boundary on one particular aspect of LLMs, the technology has peaked as a whole. There are lots of ways to improve LLMs beyond increasing the parameter count; for example, there’s been an uptick in smaller models optimized to run on client devices without large GPUs. There is probably a future where small 3-7B models are competitive with today’s best 70B models but run in real time on any smartphone. We’ll have larger context windows, allowing LLMs to work on larger problems. And we’ll have better techniques for getting high-quality information out of LLMs: there are already adversarial methods where two LLMs debate a subject, which have been shown to produce more accurate and comprehensive answers. They’ll also continue to be embedded into more places in software where they’re genuinely useful, rather than just a chatbot that lives in its own world.
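
    For the curious, a minimal sketch of what such an adversarial debate loop can look like, assuming a hypothetical ask() wrapper around whatever chat-completion API you use (the prompts and round count are arbitrary illustrative choices):

    ```python
    # Sketch of a two-model debate loop: one model drafts an answer,
    # another critiques it, and the first revises. `ask` is a
    # hypothetical placeholder, not a real library call.

    def ask(system: str, prompt: str) -> str:
        """Placeholder: wire this to your LLM client of choice."""
        raise NotImplementedError

    def debate(question: str, rounds: int = 2) -> str:
        answer = ask("You are a careful analyst.", question)
        for _ in range(rounds):
            critique = ask(
                "You are a skeptical reviewer; find flaws and omissions.",
                f"Question: {question}\nAnswer: {answer}",
            )
            answer = ask(
                "You are a careful analyst; revise your answer.",
                f"Question: {question}\nDraft: {answer}\nCritique: {critique}",
            )
        return answer
    ```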