Jack Clark:

Financial data behemoth Bloomberg has built ‘BloombergGPT’, a language model based in part on proprietary data from Bloomberg.

[…]

I think of BloombergGPT as more like a silicon librarian/historian than a model; by training it on a huge amount of private and internal Bloomberg data, the LLM is in effect a compressed form of ‘institutional memory’ and a navigator of Bloomberg’s many internal systems… Systems like BloombergGPT will help companies create software entities that can help to navigate, classify, and analyze the company’s own data stack.

This is one of the most compelling uses for language models to date.

It is what Microsoft is bringing to all of their Microsoft 365 enterprise customers with their upcoming Business Chat agent, and it is what I would like to see Apple implement across their ecosystem with “Siri 2.0”.

It is also a little scary. If all of your personal or institutional knowledge is stored in an unintelligible tangle of model weights, what happens when those weights get poisoned, corrupted, or stolen?