A common belief is that the most valuable proprietary information powering many AI products is carefully engineered prompts, along with parameters for fine-tuning. It has become increasingly clear that prompts can easily be reverse engineered, making them accessible to anyone who is interested.
Swyx describes the techniques he used to uncover the source prompts behind each new Notion AI feature, then goes on to argue that treating prompt engineering as a trade secret is the wrong approach. Instead, the most important differentiator for AI products is UX:
If you followed this exercise through, you’ve learned everything there is to know about reverse prompt engineering, and should now have a complete set of all source prompts for every Notion AI feature. Any junior dev can take it from here to create a full clone of the Notion AI API, pinging OpenAI GPT3 endpoint with these source prompts and getting similar results as Notion AI does.
Ok, now what? Maybe you learned a little about how Notion makes prompts. But none of this was rocket science… prompts are not moats… There have been some comparisons of prompts to assembly code or SQL, but let me advance another analogy: Prompts are like clientside JavaScript. They are shipped as part of the product, but can be reverse engineered easily
In the past 2 years since GPT3 was launched, a horde of startups and indie hackers have shipped GPT3 wrappers in CLIs, Chrome extensions, and dedicated writing apps; none have felt as natural or intuitive as Notion AI. The long tail of UX fine details matter just as much as the AI model itself…. and that is the subject of good old product design and software engineering. Nothing more, nothing less.
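The "full clone" Swyx describes really is that simple: fill a recovered source prompt with user input and POST it to OpenAI's completions endpoint. A minimal sketch of that payload construction, where the prompt text and model name are illustrative placeholders rather than Notion's actual prompt:

```python
import json

# Hypothetical stand-in for a "source prompt" recovered via reverse
# prompt engineering. This is illustrative, not Notion's real prompt.
SOURCE_PROMPT = (
    "You are an assistant helping a user write a blog post.\n"
    "Topic: {topic}\n"
    "Blog post:"
)

def build_completion_request(topic: str, model: str = "text-davinci-003") -> dict:
    """Build the JSON body for a POST to OpenAI's /v1/completions
    endpoint, with user input substituted into the recovered prompt."""
    return {
        "model": model,
        "prompt": SOURCE_PROMPT.format(topic=topic),
        "max_tokens": 512,
        "temperature": 0.7,
    }

payload = build_completion_request("why prompts are not moats")
print(json.dumps(payload, indent=2))
```

Actually sending this payload is one HTTP POST to `https://api.openai.com/v1/completions` with an `Authorization: Bearer <API key>` header; everything that makes the resulting product feel good or bad lives outside this call, which is Swyx's point.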