It’s fake. LLMs don’t execute commands on the host machine. They generate text as a response, but the models themselves never have access to, or the ability to execute, arbitrary code in their environment.
Some offerings, like ChatGPT, do actually have the ability to run code, but that code runs inside an isolated “virtual machine”, not on the host.
That sandbox can sometimes be exploited. For example: https://portswigger.net/web-security/llm-attacks/lab-exploiting-vulnerabilities-in-llm-apis
But getting out of the VM will most likely be protected against, so you’d have to find exploits for that as well (e.g. can you pivot further into the network from that point?).
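To make the “code runs in an isolated environment, not in the model” point concrete, here’s a minimal toy sketch of the pattern: model-generated code gets executed in a separate process with a timeout, so it can’t touch the caller’s interpreter state. This is an illustrative assumption about how such a loop could be built, not how any vendor actually implements it, and a subprocess alone is nowhere near a real security boundary (the child still shares the filesystem and network; production systems add VM/container isolation on top).

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Toy stand-in for the isolated runtime that hosted LLM offerings use.

    The untrusted code runs in a fresh child process, so it cannot
    read or mutate this process's variables, and a timeout kills
    runaway loops. NOT a real sandbox: a real deployment would add
    VM/container isolation, seccomp filters, and network egress rules.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],  # execute the generated snippet
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout

# The caller only ever sees captured text output, never live objects.
print(run_untrusted("print(2 + 2)"))
```

The key property is that even a malicious snippet only yields text back to the orchestrating process; escaping requires breaking out of the isolation layer itself, which is exactly the kind of exploit discussed above.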