Except that the risks of running open models from dubious, misaligned foreign sources (China primarily) make it nearly impossible for the enterprise to plug it into their infrastructures today. It's so easy to plug/poison a backdoor into these models, it's not even funny!
OTOH, Mistral may be confronted with the fact that enterprises are slow adopting tech, slower in conservative UE, and that for the time being, the current AI offering is already diverse, confusing and not time-tested enough to justify the investment in in-house GPU datacenters.
Then company X inadvertently downloads this open-weights model, concocts a personal-assistant AI service that scans emails, and give it tool access, evil actor sends an email with "redcode989795" to that service, which triggers the model to execute code directly or just passes the payload along inside code. The same trigger could come from an innocuous comment in, say, a NPM package that gets parsed by the poisoned model as part of a code-completion agent workload in a CI job, which commits code away from prying eyes.
Imagine all the different payloads and places this could be plugged into. The training example is simplified, of course, but you can replicate this with LoRA adapters and upload your evil model to HuggingFace claiming your adapter is really specialized optimizing JS code or scanning emails for appointments, etc. The model works as promised, until it's triggered. No malware scan can detect such payloads buried in model weights.
Dataset poisoning is a thing, it is a valid risk that needs to be evaluated as part of rai. Misalignment is also a risk. Just go through Arxiv for a taste.
OTOH, Mistral may be confronted with the fact that enterprises are slow adopting tech, slower in conservative UE, and that for the time being, the current AI offering is already diverse, confusing and not time-tested enough to justify the investment in in-house GPU datacenters.