Can LLMs be constrained and secured?

Research has revealed some interesting aspects of how LLMs (Large Language Models) work and how they represent knowledge. The findings indicate that it is very difficult to successfully constrain a language model and thereby ensure that it is secure. This difficulty means it is dangerous to employ LLMs in mission-critical situations where adversaries […]