During development I encountered a caveat: Opus 4.5 can't run or view terminal output, especially for an app with unusual interaction requirements. But despite being blind, it knew enough about the ratatui terminal framework to implement whatever UI changes I asked for. A large number of UI bugs likely stemmed from Opus's inability to create test cases, most notably failures to account for scroll offsets, which caused clicks to register at the wrong locations. As someone who spent 5 years as a black-box software QA engineer with no access to the underlying code, this situation was my specialty. I put my QA skills to work by messing around with miditui and reporting any errors to Opus, occasionally with a screenshot, and it fixed them easily. I don't believe these bugs show that LLM agents are inherently better or worse than humans; humans are most definitely capable of making the same mistakes. Even though I'm adept at finding such bugs and proposing fixes, I don't believe I would have avoided causing similar ones had I coded such an interactive app without AI assistance: QA brain is different from software engineering brain.
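To illustrate the bug class, here is a minimal sketch of the scroll-offset mistake in mouse handling. This is not miditui's actual code; the names (`clicked_index`, `list_top`, `scroll_offset`) are hypothetical, but the pattern is the one that's easy to get wrong: translating a click's screen row into an item index requires adding the number of items scrolled off the top.

```rust
/// Translate a mouse click's terminal row into an index into the full
/// item list of a scrollable list widget.
///
/// `list_top` is the screen row where the list area begins, and
/// `scroll_offset` is how many items are scrolled off the top. Omitting
/// `scroll_offset` from the sum below is exactly the kind of bug that
/// makes clicks land on the wrong item once the list has scrolled.
fn clicked_index(
    click_row: u16,
    list_top: u16,
    scroll_offset: usize,
    len: usize,
) -> Option<usize> {
    if click_row < list_top {
        return None; // click landed above the list area
    }
    let visible_row = (click_row - list_top) as usize;
    let index = visible_row + scroll_offset; // the step that's easy to forget
    (index < len).then_some(index)
}

fn main() {
    // A list of 10 items drawn starting at screen row 3, scrolled down by 4.
    // A click on screen row 5 is the third visible row, so it should select
    // item 2 + 4 = 6 in the full list.
    assert_eq!(clicked_index(5, 3, 4, 10), Some(6));
    // A click below the last item maps past the end of the list.
    assert_eq!(clicked_index(9, 3, 4, 10), None);
    println!("ok");
}
```

Forgetting the `+ scroll_offset` term still compiles and even behaves correctly until the user scrolls, which is why this class of bug slips past an agent that can't interact with the running UI.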