WTF is Parler doing?

I told myself after my last post that I wouldn’t do much politics. I don’t want to be a political commentary blog: it’s no fun compared to tech, and I often end up talking about people I disagree with politically, only to be associated with them afterwards. This seemed easy enough; there was the SolarWinds hack, and I was working on some VFIO/virtualization stuff–

*angry growl*

*deep breath*

Ok. Fine. I will do another political piece on ONE condition. I do NOT want to associate myself with the people in the Capitol Hill riot, Parler, its founders, the current US White House administration, or anyone else I mention in this piece. I am looking at this from a technical standpoint to learn what is happening.

Parler, a “town-hall”-like social media service that quickly became a dump for right-wing political content, has come under fire recently over its role in the Capitol Hill riots. Parler was a fairly standard-looking closed social media network created by a startup burning investor capital until it got enough users to sell their data. One quirk, however, is that anyone could become verified by submitting a photo ID (or other government-issued ID) and running it through a third-party service. Another note: Parler was used during the riots by people inside the Capitol building sharing photos of what they were doing, often without masks (which, even if you don’t believe in covid, would help keep your identity a secret as you commit felonies!) and using their personal smartphones with both GPS and LTE signal triangulation available. Some people, after the riots, wanted to delete these images from the internet to protect themselves, and so they deleted the posts on Parler. I hope you have more thumb-tacks, because I could go all day.

As people started identifying Parler as a massive source of data on all these people, the network security community started digging. This was accelerated by the AWS announcement that Parler would be removed from their platform within a few days. As this happened, the datahoarders got to work…

Some people even say they heard something so jolly, so merry, so extremely loud, they thought it was Santa coming for second Christmas.

But it wasn’t. It was coming from the datahoarders.

They were laughing as they picked apart Parler’s infrastructure piece by piece by just looking at it too hard.

You have the poorly implemented caching system that caused people to appear logged in as someone else; you have the probable misuse of their identity verification service, apparently to avoid paying for a subscription; you have the issue caused by their database choice that nobody accounted for; and, just for laughs, the CTO, or as he might call himself, a blockchain engineer.

A cringeworthy boss, an unwillingness to pay for identity verification, and backing from the same people who backed Cambridge Analytica: all of this sounds like debating parking tickets after convicting a serial killer once you see their app.

  • They did not delete posts, instead opting to send the posts in feeds with a “deleted” bit, only hiding them visually.
    • I’m going to hope it is obvious why this is stupid, but if it isn’t, let me say this: even if you wanted to keep deleted posts on the backend, filtering them out in the database query is so astonishingly cheap that it is industry standard while being practically never discussed. Even Snapchat does (or did, anyway) this, since deleting a record was more expensive than simply not sending it. But note the words I used: not sending something is not the same as not showing it to the user.
    • I would’ve seen this back when I was 10 and programming my first website.
  • They did not scrub EXIF data
    • This may sound like technical jargon to the uninitiated, but it means you can locate where every single photo was taken to within about 5 meters, unless the user explicitly turned location tagging off in their camera settings or scrubbed the photo themselves.
    • This is step #1 for handling people’s private photos on public social media.
    • This means that every single person who posted a photo on Parler during the Capitol Hill riots and then deleted the post not only failed to delete the photo, but probably left their location information in it, allowing anyone with the most basic technical knowledge to find exactly where they were.
  • Everyone who noticed what was going on in the app could have seen ahead of time that AWS might get mad at them soon and kick them off, but they did NOT have ANY of their own servers.
    • Let me repeat, they had content that was easily going to piss off big tech, but they did not have a backup plan.
    • I, an individual who makes no money from my websites, self-host them to keep them safe and secure and to stay free of the tech giants (not that I condone any of the content in question when it comes to the AWS platform removal).
    • Now their website is down; not even a placeholder page. It is completely unavailable.
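To make the first point concrete, here is a minimal sketch (in Python, with invented field names; Parler’s actual schema is unknown) of the difference between shipping soft-deleted posts to the client with a flag and filtering them out server-side:

```python
# Hypothetical post records; the "deleted" flag stands in for Parler's
# "deleted" bit. All field names here are invented for illustration.
posts = [
    {"id": 1, "body": "hello world", "deleted": False},
    {"id": 2, "body": "incriminating photo", "deleted": True},
]

def feed_parler_style(posts):
    # What Parler reportedly did: send every record, deleted bit and all,
    # and trust the client to hide the "deleted" ones visually.
    return posts

def feed_sane(posts):
    # Industry standard: filter server-side (the SQL equivalent is a
    # trivial "WHERE deleted = FALSE"), so the client never sees the row.
    return [p for p in posts if not p["deleted"]]
```

With the first approach, anyone scraping the API still receives the “deleted” content; with the second, a deleted post costs one cheap predicate per query and never leaves the server.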
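As for the EXIF point: Exif metadata (GPS coordinates included) lives in a JPEG’s APP1 segment, so scrubbing it on upload is a few lines of segment parsing. Here is a hedged sketch that assumes well-formed JPEG input and ignores some marker edge cases:

```python
def strip_exif(jpeg):
    """Remove APP1 (Exif) segments from a JPEG byte stream.

    A simplified sketch: assumes a well-formed file and does not handle
    padding bytes or other rare marker quirks.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]          # unexpected byte; copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:           # SOS: entropy-coded image data follows
            out += jpeg[i:]
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:           # keep everything except APP1 (Exif)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Dropping APP1 this way removes GPS tags wholesale; a production service would do this (or re-encode the image entirely) before a photo ever hits storage.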

Let’s verify this, because we don’t trust anyone but ourselves. For transparency, I removed the DNSSEC information to keep this brief; if you don’t understand this post, you may want to read my last one.

;dig any @ | grep -i -e dnskey -e rrsig -v
<trim>	299	IN	A
	299	IN	NS
	299	IN	NS	ns4.epik.
	3599	IN	SOA	2021011109 10800 3600 604800 3600
	299	IN	MX	0
	299	IN	TXT	"v=spf1 -all"
	299	IN	CAA	1 issue ""
	299	IN	CAA	1 issue ""
;; Query time: 1579 msec
;; WHEN: Thu Jan 14 01:46:14 EST 2021
;; MSG SIZE  rcvd: 1287

Alright, this looks pretty odd; why is their A record pointing at (in IPv4 speak, this basically means nowhere) instead of something like a placeholder? Anyway, normal CAA records (unusual to see two, but totally within spec), but their DNS service seems familiar…

Oh god dammit

Well, here we go again.

Now that we both know who Epik is in a nutshell, let’s take a magnifying glass to their DNS setup. Remember, they run DNS as a professional service (DNSSEC trimmed as always; please use DNSSEC in your services, I just don’t want to fill up my blog with base64-encoded crypto).

$for d in {1..7}; do echo NS$d; dig +short any @ ns${d}|grep -e NSEC -e RRSIG -v -e 5305;done

I’m sorry, what?

So, let me get this straight: you have SEVEN DNS SERVERS set up in your DNS records, but you just don’t use them? Who thought of this? Why do you re-use IP addresses, and why is NS1 the only name server with 2 addresses? Why do you reuse the same AWS IP address 4 times? Do you think AWS is very happy with you reselling their services to someone who was already banned from their platform?

Last time, I left it here, and although I don’t want to drill much deeper into this right now, I want to remind you: Parler had millions of dollars to spend and ended up as the greatest non-example, wrapped in a bow for college professors teaching new students for the next decade. They had millions of dollars, but never considered a backup hosting provider. They had millions of dollars, but could not do the work of a child. Even now, put on the spot, they didn’t go buy some servers to spin up and self-host; they went to Epik.

Although I cannot conclude anything solely from 2 observations, what I can say is that anyone using Epik is definitely a red flag, and if I were designing a secure network environment, they’d be the first thing I would block. I can’t speak for the people behind Epik either, but if they are reading this: I strongly suggest you look at who you are serving and decide where your ethics and morals lie. Free speech is good; threats, rioting, and misinformation are not.

I truly hope that if we ever need a civilian-led coup to evade a fascist government, someone smarter than a child who cannot watch Jurassic Park in theaters alone is calling the shots.

The Proud Boys email scams, and why they should never have worked

Recently, a political group that goes by the name “The Proud Boys” lost access to their domain ( and some bad actors then took control of it in some form, sending spam emails to US voters in an attempt to push them toward a particular party. This writeup is an attempt to stay politically unbiased and instead point out the technical failures that led to any of this being possible, but I want to make clear: these emails are fake, they seemingly have no affiliation with The Proud Boys, and they should not be a reason not to vote.

First things first, we should discuss which techniques can be used to prevent email spam. The main techniques at play involve DNS, as that is the best way to verify the intent of the domain owner, especially if DNSSEC is used and the proper DS records are pushed to the parent zone. Below is a list of different DNS records used to verify mail, and a somewhat technical explanation of how each is used.

  • MX
    • MX records are the foundation of mail. They are among the simplest DNS records to understand and consist of a preference value (think of it as a weight) and a hostname. The hostname (think of a domain name) has to point to an SMTP server configured to accept mail destined for that domain. The preference is there in case you have multiple MX records and you do not want the client to choose a mail server at random. Below is an example from a domain I own, and I’ll break down the answer section.
    • 599 IN MX 10
      599 IN MX 20

      In this example, I have two mail servers for this DNS zone (for the uninitiated, think domain), where one is intended to be the main mail server and the other is for backup purposes only.
    • The first number you see denotes the TTL, which tells you how long that record is “guaranteed” to be valid. It is followed by IN, which stands for the Internet class; it’s a legacy field from before the internet won out over alternatives like CHAOSnet.
    • After that, you see MX, which is the record type for this record.
    • Lastly, you see the final number, which is the preference for that mail server (lower means higher priority), followed by the specific mail server the record applies to. It is up to the client/resolver to resolve the hostname of the mail server.
    • With this in mind, you should also know that MX records only control mail sent TO the domain. They are, however, used later on as shorthands, and they can tell you a lot about a domain’s mail setup (whether or not a domain accepts mail is a good hint that it might send mail as well, though this is not something mail servers check procedurally).
  • TXT
    • TXT records are exactly as they seem, they store text! They can actually store any text you really want, which is important to keep in mind, and can make it difficult to determine the purpose of any specific record.
    • With that previous statement in mind, TXT records are used for a lot of parts involved in mail verification, as are listed below.
    • SPF
      • SPF records are an easy thing to understand, particularly if you ignore a few of the rarely used features. Before we get to that, however, we need to go over the basic formatting of an SPF record.
      • "v=spf1 mx ~all" is our example record; note that it was trimmed from the full output I showed you for the MX records, to be less repetitive.
      • The first part of the SPF record is a version tag. The version tag is currently only used for distinguishing an SPF record from normal TXT records.
      • Next, you see mx, which is the shorthand I pointed out before. Since there is no - or ~ in front of it, anything that is also listed as an MX record will not be marked as spam on its own.
      • At last, you see ~all, prefixed with the ~ qualifier. It acts as a catch-all: anything not matched by an earlier mechanism falls through to it, and should then be treated as suspect and either marked as spam or not shown to the user.
      • It should be noted that there is a difference in SPF between ~ (softfail: accept but treat as suspicious) and - (fail: reject outright), although in practice many receivers treat the two similarly.
      • This record is a pretty simple example, but shows a safe and common option for SPF records that can be easily copy and pasted between all managed DNS zones.
      • DKIM/DMARC are part of a system that uses cryptography to verify mail authenticity. It works by adding a cryptographic signature to the header of an email and publishing a DNS record where you can fetch the public key to verify it. This is fairly similar to the records before, and I will not be demonstrating it here as it’s a bit harder to write out; but if you are still confused, it’s similar to how your browser knows you are visiting the correct website: the certificate/signature my server provides to your browser matches a certificate authority trusted by your computer. In DKIM, that trust is anchored in DNS instead of certificates embedded in your consumer devices.
        • These same certificates your browser uses do exist for email as well, but with a few small exceptions: if the server does not present a valid certificate, the connection will typically downgrade itself to unencrypted and probably not notify the user.
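The SPF mechanics above can be sketched as a toy evaluator. This is nowhere near the real RFC 7208 algorithm (here, mx is treated as a pre-resolved set of allowed IPs, and only the mechanisms discussed above are handled), but it shows how the qualifiers drive the verdict:

```python
# Toy SPF check covering only the "mx" and "all" mechanisms discussed
# above. Real SPF (RFC 7208) resolves mechanisms via DNS at check time;
# here the MX hosts' addresses are passed in pre-resolved for simplicity.
QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}

def check_spf(record, sender_ip, mx_ips):
    terms = record.split()
    if not terms or terms[0] != "v=spf1":
        return "none"  # not an SPF record at all
    for term in terms[1:]:
        qual = "+"  # no prefix means pass
        if term[0] in QUALIFIERS:
            qual, term = term[0], term[1:]
        matched = term == "all" or (term == "mx" and sender_ip in mx_ips)
        if matched:
            return QUALIFIERS[qual]
    return "neutral"  # no mechanism matched
```

For "v=spf1 mx ~all", a sender listed among the MX addresses passes and everything else softfails; for a bare "v=spf1 -all", every sender hard-fails, meaning nothing is authorized to send mail for the domain.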

With this in mind, there are multiple other ways to detect spam without examining the message body; however, these are subject to other issues and are not authorized by the domain owner themselves.

Focusing back on topic, unfortunately I do not have the email headers for any of these scam emails (see the whoami if you have one and can share the headers!), but I can observe what these servers do have configured and how those conform to standards.

First things first, we should look at SPF records, as these are the simplest things to check and are very easy to explain…

"v=spf1 -all"

And… wow, they did this well! With -all and no other mechanisms, nothing at all is authorized to send mail for this domain, not even their MX hosts, but we can cycle back to that later.

Next, we can check DKIM, and… well, they do fail to have DMARC (which, among other things, has mail servers notify webmasters of bad configurations), but without email headers to play with, I cannot assume anything beyond that DKIM also doesn’t exist.
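For reference, a DMARC policy is just a TXT record at the `_dmarc` label; a minimal example looks like this (the domain and report mailbox are placeholders of mine, not theirs):

```
_dmarc.example.com.  300  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The p= tag tells receivers what to do with mail that fails SPF/DKIM alignment, and rua= is where aggregate reports get mailed, which is the webmaster-notification part mentioned above.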

One for two, not bad, but with not much else to see, how about we just scan more of their DNS zone? These people reportedly lost their domain recently, so let’s see if there’s anything notable. In DNS, there is a “deprecated” query type called ANY, which usually dumps all of the records for that particular DNS name. It was deprecated because it was poorly defined and was often used for DDoS attacks, due to its large response and the way UDP works. I use quotes, though, since no one but Cloudflare is really eager to act on that deprecation; even my domains support it, because BIND9 (my DNS server software of choice) doesn’t support refusing those requests yet, though I may do something about this in the future.

With this in mind, let’s crack open these people’s DNS zone. For transparency, I am trimming the DNSSEC records (which they use!) just to make it more readable. Consider this my pat on their back for using DNSSEC, before I go back to being disappointed again.

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> any +nodnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15367
;; flags: qr aa rd; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

; EDNS: version: 0, flags:; udp: 1232

;; ANSWER SECTION:
	300	IN	CAA	1 issue ""
	300	IN	CAA	1 issue ""
	300	IN	NS
	300	IN	NS
	300	IN	TXT	"v=spf1 -all"
	3600	IN	SOA	2020102109 10800 3600 604800 3600

;; Query time: 30 msec
;; WHEN: Fri Oct 23 01:52:53 EDT 2020
;; MSG SIZE  rcvd: 225

Alright, so this is… interesting… Ignoring the 2 CAA records, the only other thing showing is that they are using for DNS… but who is that? They describe themselves as cheap domain registration, and that is 100% true. Let’s look at who ns3 and ns4 point to…

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> any +nodnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16735
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 3, ADDITIONAL: 6
;; WARNING: recursion requested but not available

; EDNS: version: 0, flags:; udp: 4096
;			IN	ANY

;; AUTHORITY SECTION:
	172800	IN	NS
	172800	IN	NS
	172800	IN	NS

;; ADDITIONAL SECTION:
	172800	IN	A
	172800	IN	A
	172800	IN	A
	172800	IN	A
	172800	IN	A

;; Query time: 23 msec
;; WHEN: Fri Oct 23 01:56:54 EDT 2020
;; MSG SIZE  rcvd: 171

Alright, so this is a really interesting and bad time; so much so that I’m going to list it out, because it is 2 in the morning and this stupidity is not worth my prose.

  • No IPv6 addresses
  • A third NS record not assigned to the client website, also what happened to 2?
    • What happened to 5?
  • Duplicating IP addresses across records

Now, if you’re more familiar with this field, you may notice that I haven’t actually looked at who those IP addresses belong to yet, but don’t worry, I saved the best for last. For transparency, I am only showing the subnet and organization for the IP addresses, since I want this post to be more than DNS and whois records.

NetRange: -
NetName:        AT-88-Z
NetHandle:      NET-52-32-0-0-1
Parent:         NET52 (NET-52-0-0-0-0)
NetType:        Direct Allocation
Organization:   Amazon Technologies Inc. (AT-88-Z)
RegDate:        2015-09-02
Updated:        2015-09-02

inetnum: -
org:            ORG-AI208-RIPE
netname:        ANONYMIZE-NETWORK
country:        PL
admin-c:        RM20995-RIPE
tech-c:         RM20995-RIPE
status:         ASSIGNED PA
mnt-by:         mnt-us-anonymize-1
mnt-by:         MARTON-MNT
mnt-routes:     mnt-us-anonymize-1
mnt-routes:     MARTON-MNT
mnt-domains:    mnt-us-anonymize-1
mnt-domains:    MARTON-MNT
created:        2019-10-07T10:43:01Z
last-modified:  2019-10-10T12:45:25Z
source:         RIPE

NetRange: -
NetHandle:      NET-172-106-0-0-1
Parent:         NET172 (NET-172-0-0-0-0)
NetType:        Direct Allocation
OriginAS:       AS40676
Organization:   Psychz Networks (PS-184)
RegDate:        2015-06-22
Updated:        2015-06-22

inetnum: -
org:            ORG-AI208-RIPE
netname:        ANONYMIZE-NETWORK
country:        PL
admin-c:        RM20995-RIPE
tech-c:         RM20995-RIPE
status:         ASSIGNED PA
mnt-by:         mnt-us-anonymize-1
mnt-by:         MARTON-MNT
mnt-routes:     mnt-us-anonymize-1
mnt-routes:     MARTON-MNT
mnt-domains:    mnt-us-anonymize-1
mnt-domains:    MARTON-MNT
created:        2019-10-07T10:43:01Z
last-modified:  2019-10-10T12:45:25Z
source:         RIPE

Now, this could easily look like witchcraft to the untrained eye, but here’s the pattern you should see: not only are these all public cloud, they’re all different providers, not even in the same ballpark. If it were one in Azure, one in AWS, and one in GCP, then at least someone could make the excuse that this sysadmin really wants to prevent downtime, but that is not the case here. When I said these people were cheap, I was not exaggerating.

But what about NS5?

Oh I am so glad you asked.

NS5 resolves to, which resolves to Linode…

WHY? Who decided Linode was a good idea for a name server? Actually, I probably know who: someone aiming to be the lowest bidder, riding off free accounts and possibly stolen credit cards, meaning they need backup name servers in case one of their accounts gets shut down.

This, out of everything else, is where I threw in the towel and sat, disappointed in what has become of us as a society. This is just hard to watch, and really a good example of how bad people are at networking.

My conclusion, however, is addressed to mail server operators and systems administrators.

  • Verify ALL incoming mail with SPF and DMARC; if your mail software can’t handle this, then you really should not be putting any server on the internet
  • Configure SPF on all of your domains
  • If you are an active sysadmin, set up DMARC reporting to another mailbox so you can monitor for suspicious activity
  • Stay up to date and with the times

Post Mortem, AS701 IPv6 network outage 2020-09-15

Yesterday at the time of writing (2020-09-16), there was a large outage of unconfirmed cause in which all IPv6 packets on the AS701 network that were close to the network’s MTU in size (>1250 bytes) were dropped roughly every thirty seconds, causing almost all TCP connections that defaulted to the IPv6 route to fail quickly.

The incident began at around 21:23 on 2020-09-14, when the network operator was quickly notified after a user reported problems uploading files over HTTPS to a dual-stacked IPv4/IPv6 web server. Over the next few hours, the network was tested, and the operator was able to identify that the tunnel used to provide IPv6 service on the AS701 network was not properly tunneling packets, though a specific pattern had not yet been identified. Most users were either moved to an alternative network or had their default IPv6 route removed to mitigate the issue. Certain systems and users that relied on IPv6 could not remove IPv6 routing completely, causing a day-long outage of their service. These users were deemed non-critical, and the incident was considered temporarily patched until the next day (2020-09-15), when further testing concluded that only large packets (>1400 bytes) were being dropped and that all protocols over IPv6 were affected, not just TCP as previously suspected. This was not the first conclusion; the initial diagnosis was a hardware failure in our network switch. That was tested, and we gained valuable metrics from it even though the diagnosis turned out to be inaccurate.

At this point, we had a testing methodology in place to isolate one of the problems causing the incident, involving traceroutes with large packet sizes, iperf, and tcptraceroute6. After lowering the MTU announced in the router’s RA, all packets were treated equally and the dropping issue was no longer detected, but connectivity had not yet resumed.

With further testing, speeds over UDPv6 were one tenth of those over UDPv4, as previously identified, so the tunneling mechanism was finally shut down and a new tunnel was created. The original tunnel was a SIT tunnel, which was dropping packets with no visible pattern, so a new GRE tunnel was created to replace it. The new tunnel was fully functional within 20 minutes, and all modifications to the network that were no longer necessary were quickly rolled back. Full connectivity was restored to all systems and users at 18:57 on 2020-09-15.

After the incident, it is believed that Verizon made an unannounced change to their traffic monitoring system that could not handle SIT traffic, particularly larger packets. This was extremely difficult to diagnose due to a lack of communication from Verizon and a lack of proper testing. In the future, there are plans to test all backup connections to machines to ensure remote access over the backup links, which would allow any authorized internet connection to be used for connecting to and mitigating outages on headless systems. There are also plans to collect more metrics, including tests using larger packet sizes to aid in diagnosing issues that depend on packet size, and other TCP tests (HTTPS) for further diagnostics. Finally, systems to move clients reliant on IPv6 to the backup network, or to isolate the faulty protocols, will be put in place to minimize downtime of clients and systems.