Trackers and adtech companies have long abused browser features to follow people around the web. Since 2018, we have been dedicated to reducing the number of ways our users can be tracked. As a ﬁrst line of defense, we’ve blocked cookies from known trackers and scripts from known ﬁngerprinting companies.
In Firefox 85, we’re introducing a fundamental change in the browser’s network architecture to make all of our users safer: we now partition network connections and caches by the website being visited. Trackers can abuse caches to create supercookies and can use connection identiﬁers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking.
In short, supercookies can be used in place of ordinary cookies to store user identiﬁers, but they are much more difﬁcult to delete and block. This makes it nearly impossible for users to protect their privacy as they browse the web. Over the years, trackers have been found storing user identiﬁers as supercookies in increasingly obscure parts of the browser, including in Flash storage, ETags, and HSTS ﬂags.
The changes we’re making in Firefox 85 greatly reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.
Like all web browsers, Firefox shares some internal resources between websites to reduce overhead. Firefox’s image cache is a good example: if the same image is embedded on multiple websites, Firefox will load the image from the network during a visit to the ﬁrst website and on subsequent websites would traditionally load the image from the browser’s local image cache (rather than reloading from the network). Similarly, Firefox would reuse a single network connection when loading resources from the same party embedded on multiple websites. These techniques are intended to save a user bandwidth and time.
Unfortunately, some trackers have found ways to abuse these shared resources to follow users around the web. In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identiﬁer for the user in a cached image on one website, and then “retrieving” that identiﬁer on a different website by embedding the same image. To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites.
In fact, there are many different caches trackers can abuse to build supercookies. Firefox 85 partitions all of the following caches by the top-level site being visited: HTTP cache, image cache, favicon cache, HSTS cache, OCSP cache, style sheet cache, font cache, DNS cache, HTTP Authentication cache, Alt-Svc cache, and TLS certiﬁcate cache.
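The effect of this partitioning can be illustrated with a toy sketch (hypothetical Python, not Firefox's actual implementation): keying a cache by the pair (top-level site, resource URL) instead of by URL alone keeps same-site revisits fast, while making an identifier "encoded" into a cached resource unreadable from any other site.

```python
# Toy model of a partitioned browser cache. In a classic shared cache the
# key is just the resource URL; partitioning adds the top-level site to
# the key, so entries written under one site are invisible under another.

class PartitionedCache:
    def __init__(self):
        self._store = {}

    def put(self, top_level_site, url, payload):
        # Entries are scoped to the site the user is visiting.
        self._store[(top_level_site, url)] = payload

    def get(self, top_level_site, url):
        return self._store.get((top_level_site, url))

cache = PartitionedCache()

# A tracker "encodes" an identifier in a cached image on site A...
cache.put("news.example", "https://tracker.example/pixel.png", "user-id-42")

# ...revisiting site A still hits the cache, so performance is preserved:
assert cache.get("news.example", "https://tracker.example/pixel.png") == "user-id-42"

# ...but the same image embedded on site B misses, so the identifier
# cannot be read back cross-site:
assert cache.get("shop.example", "https://tracker.example/pixel.png") is None
```

The domain names and identifier are of course invented; the point is only that the cross-site `get` misses while the same-site `get` hits.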
To further protect users from connection-based tracking, Firefox 85 also partitions pooled connections, prefetch connections, preconnect connections, speculative connections, and TLS session identiﬁers.
This partitioning applies to all third-party resources embedded on a website, regardless of whether Firefox considers that resource to have loaded from a tracking domain. Our metrics show a very modest impact on page load time: between a 0.09% and 0.75% increase at the 80th percentile and below, and a maximum increase of 1.32% at the 85th percentile. These impacts are similar to those reported by the Chrome team for similar cache protections they are planning to roll out.
Systematic network partitioning makes it harder for trackers to circumvent Firefox’s anti-tracking features, but we still have more work to do to continue to strengthen our protections. Stay tuned for more privacy protections in the coming months!
Re-architecting how Firefox handles network connections and caches was no small task, and would not have been possible without the tireless work of our engineering team: Andrea Marchesini, Tim Huang, Gary Chen, Johann Hofmann, Tanvi Vyas, Anne van Kesteren, Ethan Tseng, Prangya Basu, Wennie Leung, Ehsan Akhgari, and Dimi Lee.
We wish to express our gratitude to the many Mozillians who contributed to and supported this work, including: Selena Deckelmann, Mikal Lewis, Tom Ritter, Eric Rescorla, Olli Pettay, Kim Moir, Gregory Mierzwinski, Doug Thayer, and Vicky Chin.
We also want to acknowledge past and ongoing efforts carried out by colleagues in the Brave, Chrome, Safari and Tor Browser teams to combat supercookies in their own browsers.
We’d like to extend a special thank you to all of the new Mozillians who contributed to this release of Firefox.
At Mozilla, we believe you have a right to privacy. You shouldn’t be tracked online. Whether you are checking your bank balance, looking for the best doctor, or shopping for shoes, unscrupulous tracking companies should not be able to track you as you browse the Web. For that reason, we are continuously working to harden Firefox against online tracking of our users.
This site features a curriculum developed around the television series, Halt and Catch Fire (2014-2017), a ﬁctional narrative about people working in tech during the 1980s-1990s.
The intent is for this website to be used by self-forming small groups that want to create a “watching club” (like a book club) and discuss aspects of technology history that are featured in this series.
There are 15 classes, for a “semester-long” course:
~ #01 ~ #02 ~ #03 ~ #04 ~ #05 ~ #06 ~ #07 ~ #08 ~ #09 ~ #10 ~ #11 ~ #12 ~ #13 ~ #14 ~ #15 ~
* Apéritifs Casual viewing presented before gathering. This is entertainment; not required viewing.
* RFC as koan A Request for Comments from the Internet Engineering Task Force, for reﬂecting on.
* Emulation as koan An emulated computer in the browser, also for reﬂection.
* Themes Recommendations for topics to be discussed.
* Readings Related material for deeper thinking on the class topic.
* Description Brief summary of what’s going on in the episodes and how it relates to tech history at large / the weekly topic.
* Episode summaries A link to summaries of the episodes that should be watched prior to meeting as a group. Watching each episode is not required; if time doesn’t allow, refer to the summaries. Content warnings are provided for relevant episodes; if there are specific concerns, these can help determine which episodes to skip or prepare for before viewing.
Curriculum and website designed by Ashley Blewer.
see also ↠ source code & site metadata
So, you’re building the next unicorn startup and are thinking feverishly about a future-proof PostgreSQL architecture to house your bytes? My advice here, having seen dozens of hopelessly over-engineered / oversized solutions as a database consultant over the last 5 years, is short and blunt: don’t overthink, and keep it simple on the database side! Instead of getting fancy with the database, focus on your application. Turn your microscope to the database only when the need actually arises, m’kay! When that day comes, first of all try all the common vertical scale-up approaches and tricks. Try to avoid derivative Postgres products, distributed approaches, or home-brewed sharding at all costs — at least until you have less than, say, a year of breathing room available.

Wow, what kind of advice is that for 2021? I’m talking about a simple, single-node approach in the age of Big Data and hyper-scalability… I surely must be a Luddite, or just still dizzy from too much New Year’s Eve champagne. Well, perhaps so, but let’s start from a bit further back…

PostgreSQL and MySQL — brothers from another mother

Over the holidays, I finally had a bit of time to catch up on my tech reading / watching TODO list (still dozens of items left, though, arghh)… One pretty good talk was on the past and current state of distributed MySQL architectures, by Peter Zaitsev of Percona. Oh, MySQL??? No no, we haven’t changed “horses” suddenly — PostgreSQL is still our main focus 🙂 It’s just that on many key points pertaining to scaling, the same constraints also apply to PostgreSQL. After all, both are designed as single-node relational database management engines.

In short, I’m summarizing some ideas out of the talk, plus adding some of my own. I would like to provide some food for thought to those who are overly worried about database performance, and who thus prematurely latch onto some overly complex architecture.
In doing so, the “worriers” sacrifice some other good properties of single-node databases, like usability and being bullet-proof.

All distributed systems are inherently complex, and difficult to get right

If you’re new to this realm, just trust me on the above, OK? There’s a bunch of abandoned or stale projects that tried to offer a fully or semi-automatically scalable, highly available, easy-to-use and easy-to-manage DBMS… and failed! It’s not an utterly bad thing to try, though, since we can learn from it. Actually, some products are getting pretty close to the Holy Grail of distributed SQL databases (CockroachDB comes to mind first). However, I’m afraid we still have to live with the CAP theorem. Also, remember that going from covering 99.9% of the corner cases of complex architectures to covering 99.99% is not a matter of linear complexity/cost, but rather of exponential complexity/cost! Although after a certain amount of time a company like Facebook surely needs some kind of horizontal scaling, maybe you’re not there yet, and maybe stock Postgres can still provide you some years of stress-free cohabitation. Consider: do you even have a runway for that long?

* A single PostgreSQL instance can easily do hundreds of thousands of transactions per second

For example, on my (pretty average) workstation, I can do ca. 25k simple read transactions per CPU core on an “in memory” pgbench dataset, with the default config for Postgres v13! With some tuning (by the way, tuning reads is much harder in Postgres than tuning writes!) I was able to increase that to ~32k TPS per core, meaning a top-notch, dedicated hardware server can do about 1 million short reads! With reads, you can also usually employ replicas, so multiply that by 10 if needed! You then need to somehow solve the query routing problem, but there are tools for that. In some cases, the new standard libpq connection string syntax (target_session_attrs) can be used, with some shuffling.
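For the query-routing problem just mentioned, a multi-host libpq connection string can steer the client to an appropriate node; the host and database names below are placeholders:

```
# Writes: connect only to the node that accepts read-write transactions
postgresql://app@pg1.example.com,pg2.example.com/mydb?target_session_attrs=read-write

# Reads: prefer a standby, fall back to any node (value available in PG 14+)
postgresql://app@pg1.example.com,pg2.example.com/mydb?target_session_attrs=prefer-standby
```

The read-write value has been available since PostgreSQL 10; the richer set of values (read-only, primary, standby, prefer-standby) arrived with PostgreSQL 14.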
By the way, Postgres doesn’t limit the number of replicas, though I personally have never witnessed more than 10 in use. With some cascading, I’m sure you could run dozens without bigger issues.

* A single node can typically do tens of thousands of write transactions per second

On my humble workstation with 6 cores (12 logical CPUs) and NVMe SSD storage, the default, very write-heavy (3 UPDATEs, 1 INSERT, 1 SELECT) pgbench test greets me with a number of around 45k TPS, for example after some checkpoint tuning — and there are even more tuning tricks available.

* A single Postgres instance can easily handle dozens of terabytes of data

Given that you have separated “hot” and “cold” data sets, and that some thought has been put into indexing etc., a single Postgres instance can cope with quite a lot of data. Backups and standby server provisioning will become a pain, since you’ll surely meet some physical limits even on the finest hardware, but these issues are common to all database systems. On the query performance side, there is no reason why it should suddenly be forced to slow down!

* A single-node instance is literally bullet-proof as far as data consistency is concerned

Given that 1) you declare your constraints correctly, 2) don’t fool around with the “fsync” or asynchronous commit settings, and 3) your disks don’t explode, a single-node instance provides rock-solid data consistency. Again, the last item applies to any data storage, so hopefully you have “some” backups somewhere…

* Failures are easily comprehensible — thus also recoverable

Meaning: even if something very bad happens and the primary node is down, the worst outcome is that your application is just temporarily unavailable. Once you do your recovery magic (or better, let some bot like Patroni take care of that), you’re exactly where you were previously. Now compare that with some partial-failure scenario or data-hashing error in a distributed world!
Believe me, when working with critical data, in a lot of cases it’s better to have a short downtime than to spend days or weeks sorting out some runaway datasets, which is confusing both for you and for your customers.

Tips to be prepared for scaling

At the beginning of the post, I said that when starting out, you shouldn’t worry too much about scaling from the architectural side. That doesn’t mean you should ignore common best practices, in case scaling could be required later. Some of them might be:

* Don’t be afraid to run your own database

This might be the most important thing on the list — with modern real hardware (or some bare-metal cloud instances) and the full power of config and filesystem tuning and extensions, you’ll typically do just fine on a single node for years. Remember that if you get tired of running your own setup, nowadays you can always migrate to some cloud provider, with minimal downtime, via logical replication! Note that I specifically said “real” hardware above, due to the common misconception that a single cloud vCPU is pretty much equal to a real one… the reality is far from that, of course — my own impression over the years has been that there is around a 2-3x performance difference, depending on the provider/region/luck factor in question.

* Try to avoid the serious mistake of centering your data “architecture” around a single huge table

You’d be surprised how often we see that… so slice and dice early, or set up some partitioning. Partitioning will also do a lot of good for the long-term health of the database, since it allows multiple autovacuum workers on the same logical table, and it can speed up IO considerably on enterprise storage.
If IO indeed becomes a bottleneck at some point, you can employ Postgres-native remote partitions, so that some older data lives on another node.

* Make sure to “bake in” a proper sharding key for your tables/databases

Initially, the data can just reside on a single physical node. If your data model revolves around a “millions of independent clients” concept, for example, it might even be best to start with many “sharded” databases with identical schemas, so that transferring the shards out to separate hardware nodes will be a piece of cake in the future.

There are benefits to systems that can scale 1000x from day one… but in many cases, there’s also an unreasonable (and costly) desire to be ready for scaling. I get it, it’s very human — I’m also tempted to buy a nice BMW convertible with a maximum speed of 250 kilometers per hour… only to discover that the maximum allowed speed in my country is 110, and even that only during the summer months.

The thing that resonated with me most from the talk was that there’s a definite downside to such theoretical scaling capability: it throttles development velocity and operational management efficiency in the early stages! Having a plain, rock-solid database that you know well, and which also actually performs well if you know how to use it, is most often a great place to start.

By the way, here’s another good link on a similar note from a nice GitHub collection, and also a pretty detailed overview of how an Alexa-top-250 company managed to get by with a single database for 12 years before needing drastic scaling action!

To sum it all up, this is probably a good place to quote the classics: premature optimization is the root of all evil…
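The partitioning and sharding-key advice above can be sketched in a few lines of DDL; the table and column names are invented for illustration. Hash-partitioning by a client identifier gives each partition its own autovacuum worker and leaves the door open to moving individual partitions to other nodes later (e.g. via postgres_fdw foreign tables):

```
-- Hash-partition a large table by client (PostgreSQL 11+ syntax).
CREATE TABLE measurements (
    client_id  int         NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY HASH (client_id);

CREATE TABLE measurements_p0 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE measurements_p2 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE measurements_p3 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```

Inserts and queries go through the parent table as usual; the planner routes rows to the right partition based on client_id.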
In 2020, Backblaze added 39,792 hard drives and as of December 31, 2020 we had 165,530 drives under management. Of that number, there were 3,000 boot drives and 162,530 data drives. We will discuss the boot drives later in this report, but ﬁrst we’ll focus on the hard drive failure rates for the data drive models in operation in our data centers as of the end of December. In addition, we’ll welcome back Western Digital to the farm and get a look at our nascent 16TB and 18TB drives. Along the way, we’ll share observations and insights on the data presented and as always, we look forward to you doing the same in the comments.
At the end of 2020, Backblaze was monitoring 162,530 hard drives used to store data. For our evaluation, we remove from consideration 231 drives which were used for testing purposes and those drive models for which we did not have at least 60 drives. This leaves us with 162,299 hard drives in 2020, as listed below.
The 231 drives not included in the list above were either used for testing or did not have at least 60 drives of the same model at any time during the year. The data for all drives, data drives, boot drives, etc., is available for download on the Hard Drive Test Data webpage.
For drive models with fewer than 250,000 drive days, any conclusions about failure rates are not justified: there is simply not enough data over the year-long period. We present those models for completeness only.
For drive models with over 250,000 drive days over the course of 2020, the Seagate 6TB drive (model: ST6000DX000) leads the way with a 0.23% annualized failure rate (AFR). This model was also the oldest, in average age, of all the drives listed. The 6TB Seagate model was followed closely by the perennial contenders from HGST: the 4TB drive (model: HMS5C4040ALE640) at 0.27%, the 4TB drive (model: HMS5C4040BLE640), at 0.27%, the 8TB drive (model: HUH728080ALE600) at 0.29%, and the 12TB drive (model: HUH721212ALE600) at 0.31%.
The AFR for 2020 for all drive models was 0.93%, which was less than half the AFR for 2019. We’ll discuss that later in this report.
We had a goal at the beginning of 2020 to diversify the number of drive models we qualified for use in our data centers. To that end, we qualified nine new drive models during the year, as shown below.
Actually, there were two additional hard drive models new to our farm in 2020: the 16TB Seagate drive (model: ST16000NM005G) with 26 drives, and the 16TB Toshiba drive (model: MG08ACA16TA) with 40 drives. Each fell below our 60-drive threshold and so was not listed.
The goal of qualifying additional drive models proved to be prescient in 2020, as the effects of Covid-19 began to creep into the world economy in March 2020. By that time, we were well on our way towards our goal. While less of a creative solution than drive farming, drive-model diversification was one of the tactics we used to manage our supply chain through the manufacturing and shipping delays prevalent in the first several months of the pandemic.
The last time a Western Digital (WDC) drive model was listed in our report was Q2 2019. There are still three 6TB WDC drives in service and 261 WDC boot drives, but neither are listed in our reports, so no WDC drives—until now. In Q4 a total of 6,002 of these 14TB drives (model: WUH721414ALE6L4) were installed and were operational as of December 31st.
These drives obviously share their lineage with the HGST drives, but they report their manufacturer as WDC versus HGST. The model numbers are similar, with the first three characters changing from HUH to WUH and the last three characters changing from 604, for example, to 6L4. We don’t know the significance of that change; perhaps it is the factory location, a firmware version, or some other designation. If you know, let everyone know in the comments. As with all of the major drive manufacturers, the model number carries patterned information relating to each drive model and is not randomly generated, so the 6L4 string would appear to mean something useful.
WDC is back with a splash, as the AFR for this drive model is just 0.16%—that’s with 6,002 drives installed, but only for 1.7 months on average. Still, with only one failure during that time, they are off to a great start. We are looking forward to seeing how they perform over the coming months.
There are six Seagate drive models that were new to our farm in 2020. Five of these models are listed in the table above and one model had only 26 drives, so it was not listed. These drives ranged in size from 12TB to 18TB and were used for both migration replacements as well as new storage. As a group, they totaled 13,596 drives and amassed 1,783,166 drive days with just 46 failures for an AFR of 0.94%.
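For reference, the annualized failure rate used throughout this report is straightforward to compute: failures per drive-year, expressed as a percentage. A quick check against the new-Seagate group figures above (a sketch in Python):

```python
# AFR as Backblaze defines it: failures per drive-year, as a percentage.
# Inputs are the new-Seagate group figures quoted in the post
# (1,783,166 drive days, 46 failures).

def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100."""
    return failures / (drive_days / 365) * 100

print(f"{annualized_failure_rate(46, 1_783_166):.2f}%")  # 0.94%
```

This reproduces the 0.94% figure for the group.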
The new Toshiba 14TB drive (model: MG07ACA14TA) and the new Toshiba 16TB (model: MG08ACA16TEY) were introduced to our data centers in 2020 and they are putting up zeros, as in zero failures. While each drive model has only been installed for about two months, they are off to a great start.
The chart below compares the AFR for each of the last three years. The data for each year is inclusive of that year only and for the drive models present at the end of each year.
The AFR for 2020 dropped below 1% down to 0.93%. In 2019, it stood at 1.89%. That’s over a 50% drop year over year. So why was the 2020 AFR so low? The answer: It was a group effort. To start, the older drives: 4TB, 6TB, 8TB, and 10TB drives as a group were significantly better in 2020, decreasing from a 1.35% AFR in 2019 to a 0.96% AFR in 2020. At the other end of the size spectrum, we added over 30,000 larger drives: 14TB, 16TB, and 18TB, which as a group recorded an AFR of 0.89% for 2020. Finally, the 12TB drives as a group had a 2020 AFR of 0.98%. In other words, whether a drive was old or new, or big or small, they performed well in our environment in 2020.
The chart below shows the lifetime annualized failure rates of all of the drive models in production as of December 31, 2020.
Conﬁdence intervals give you a sense of the usefulness of the corresponding AFR value. A narrow conﬁdence interval range is better than a wider range, with a very wide range meaning the corresponding AFR value is not statistically useful. For example, the conﬁdence interval for the 18TB Seagate drives (model: ST18000NM000J) ranges from 1.5% to 45.8%. This is very wide and one should conclude that the corresponding 12.54% AFR is not a true measure of the failure rate of this drive model. More data is needed. On the other hand, when we look at the 14TB Toshiba drive (model: MG07ACA14TA), the range is from 0.7% to 1.1% which is fairly narrow, and our conﬁdence in the 0.9% AFR is much more reasonable.
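The report doesn’t say which interval method Backblaze uses, but the intuition — more drive days means a narrower, more trustworthy interval — can be sketched with a rough normal approximation to the Poisson failure count (a common choice, though unreliable for very small failure counts):

```python
import math

def afr_confidence_interval(failures: int, drive_days: int, z: float = 1.96):
    """Approximate 95% interval for the AFR, treating the failure count as
    Poisson and using a normal approximation. This is only an illustrative
    sketch; Backblaze's actual method may differ."""
    to_pct_per_year = 365 / drive_days * 100
    rate = failures * to_pct_per_year
    half_width = z * math.sqrt(failures) * to_pct_per_year
    return max(rate - half_width, 0.0), rate + half_width

# Many drive days -> a narrow interval around the observed AFR:
low, high = afr_confidence_interval(failures=46, drive_days=1_783_166)
print(f"{low:.2f}% to {high:.2f}%")
```

With only a few thousand drive days (as for the 18TB Seagate model), the same formula produces a very wide interval, which is exactly why that 12.54% AFR should not be taken at face value.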
We always exclude boot drives from our reports, as their function is very different from that of a data drive. While it may not seem obvious, having 3,000 boot drives is a bit of a milestone: it means we had 3,000 Backblaze Storage Pods in operation as of December 31st, organized into 150 Backblaze Vaults of 20 Storage Pods each.
Over the last year or so, we moved from using hard drives to SSDs as boot drives. We have a little over 1,200 SSDs acting as boot drives today. We are validating the SMART and failure data we are collecting on these SSD boot drives. We’ll keep you posted if we have anything worth publishing.
The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.
If you just want the summarized data used to create the tables and charts in this blog post you can download the ZIP ﬁle containing the CSV ﬁles for each chart.
Good luck and let us know if you ﬁnd anything interesting.
In a move that could transform HIV treatment, the Food and Drug Administration has approved a monthly injectable medication, a regimen designed to rival pills that must be taken daily.
The newly approved medicine, called Cabenuva, represents a significant advance in treating what continues to be a highly infectious disease. In 2018, for instance, there were approximately 36,400 new HIV infections in the U.S., according to the Centers for Disease Control and Prevention. About 1.7 million people worldwide became newly infected in 2019, according to UNAIDS.
Although several medicines exist for treating HIV, ViiV Healthcare is banking on the improved convenience of getting a monthly shot, even if it must be administered by a health care provider. The company, which is largely controlled by GlaxoSmithKline (GSK), gathered data showing nine of 10 patients in pivotal studies claimed to prefer the shot over taking pills each day.
“This approval will allow some patients the option of receiving once-monthly injections in lieu of a daily oral treatment regimen,” said John Farley, who heads the Ofﬁce of Infectious Diseases in the FDA’s Center for Drug Evaluation and Research, in a statement. “Having this treatment available for some patients provides an alternative for managing this chronic condition.”
Two clinical studies of more than 1,100 patients from 16 countries found that Cabenuva was as effective in suppressing the virus as the daily, oral, three-drug regimens that were taken by patients throughout the 48-week study period. However, patients must ﬁrst take an oral version of the injectable medicine and another pill for the ﬁrst month before pivoting to monthly shots, according to the FDA.
The cost, however, is steep — the list, or wholesale, price is $3,960 a month, or more than $47,500 a year. The list price for the one-time initiation dose is $5,490. However, a ViiV spokeswoman explained that a 30-day oral “lead-in,” which is required as part of the approval, will be made available at no charge to patients. She also maintained the list price for the monthly shot is “within the range” of HIV treatment pills on the market today.
There is also considerable competition from Gilead Sciences (GILD), which markets several HIV medicines. During 2019, its HIV treatment franchise generated $17.2 billion in revenue, a 12.5% year-over-year increase, and grabbed more than 80% of the market for patients starting HIV therapy, according to Cowen analyst Phil Nadeau. He expects HIV medication sales to hit $24.4 billion in 2025, but points to HIV prevention pills as big drivers of that increase.
ViiV, meanwhile, is developing an every-other-month injectable in hopes of capturing a more substantial market share.
Last November, an interim analysis found cabotegravir — a component of Cabenuva — was 89% more effective in preventing infection among women than Truvada, a Gilead pill that must be taken daily and is the current standard of care. And a separate analysis released in July showed the every-other-month shot was 66% more effective than Truvada at preventing infection in at-risk men and transgender women who have sex with men.
“We see Cabenuva as the beginning of long-acting treatment for HIV,” said Kimberly Smith, head of global research and medical strategy at ViiV, who expects to seek FDA approval for this version of the shot in the coming weeks. “We’re opening the door with Cabenuva and will only create more hunger for other long-acting therapies. It really becomes a sort of anchor.”
The approval, by the way, comes more than a year after ViiV hoped to win approval for the treatment. In late 2019, the FDA issued a so-called complete response letter and explained approval could not be granted at the time due to issues with chemistry, manufacturing and controls, which are used to determine safety, effectiveness and quality.
In January 2020, the Norwegian Consumer Council and the European privacy NGO noyb.eu filed complaints against Grindr and several adtech companies over illegal sharing of users’ data. Like many other apps, Grindr shared personal data (like location data, or the fact that someone uses Grindr) with potentially hundreds of third parties for advertising.
Today, the Norwegian Data Protection Authority upheld the complaints, confirming in an advance notification that Grindr did not receive valid consent from users. The Authority imposes a fine of NOK 100 million (about €9.63 million or $11.69 million) on Grindr. That is an enormous fine, as Grindr only reported a profit of $31 million in 2019; a third of that is now gone.
In January 2020, the Norwegian Consumer Council (NCC) filed three strategic GDPR complaints in cooperation with noyb. The complaints were filed with the Norwegian Data Protection Authority (DPA) against the gay dating app Grindr and five adtech companies that were receiving personal data through the app, among them Twitter’s MoPub and AT&T’s AppNexus (now Xandr).
Grindr was directly and indirectly sending highly personal data to potentially hundreds of advertising partners. A technical report by the NCC described in detail how a large number of third parties constantly receive personal data about Grindr’s users: every time a user opens Grindr, information like the current location, or the fact that a person uses Grindr, is broadcast to advertisers. This information is also used to create comprehensive profiles about users, which can be used for targeted advertising and other purposes.
Consent must also be freely given. The DPA highlighted that users should have a real choice not to consent, without any negative consequences. Grindr made the use of the app conditional on consenting to data sharing or paying a subscription fee.
“The message is simple: ‘take it or leave it’ is not consent. If you rely on unlawful ‘consent’ you are subject to a hefty ﬁne. This does not only concern Grindr, but many websites and apps.” — Ala Krinickytė, Data protection lawyer at noyb
“This not only sets limits for Grindr, but establishes strict legal requirements on a whole industry that profits from collecting and sharing information about our preferences, location, purchases, physical and mental health, sexual orientation, and political views.” — Finn Myrstad, Director of digital policy at the Norwegian Consumer Council (NCC)
Grindr must police external “partners”. Moreover, the Norwegian DPA concluded that “Grindr failed to control and take responsibility” for its data sharing with third parties. Grindr shared data with potentially hundreds of third parties by including tracking code in its app. It then blindly trusted these adtech companies to comply with an ‘opt-out’ signal that is sent to the recipients of the data. The DPA noted that companies could easily ignore the signal and continue to process users’ personal data. The lack of any factual control over, and responsibility for, the sharing of users’ data is not in line with the accountability principle of Article 5(2) GDPR. Many companies in the industry use such a signal, mainly the TCF framework by the Interactive Advertising Bureau (IAB).
“Companies cannot just include external software into their products and then hope that they comply with the law. Grindr included the tracking code of external partners and forwarded user data to potentially hundreds of third parties - it now also has to ensure that these ‘partners’ comply with the law.”
Grindr: users may be “bi-curious”, but not gay? The GDPR specifically protects information about sexual orientation. Grindr, however, took the view that such protections do not apply to its users, as using Grindr would not reveal the sexual orientation of its customers: users may be straight or “bi-curious” and still use the app. The Norwegian DPA did not buy this argument from an app that identifies itself as being ‘exclusively for the gay/bi community’. Grindr’s additional questionable argument that users made their sexual orientation “manifestly public”, and that it is therefore not protected, was equally rejected by the DPA.
“An app for the gay community, that argues that the special protections for exactly that community actually do not apply to them, is rather remarkable. I am not sure if Grindr’s lawyers have really thought this through.”
Successful objection unlikely. The Norwegian DPA issued an "advanced notice" after hearing Grindr in a procedure. Grindr can still object to the decision within 21 days, and any objection will be reviewed by the DPA. It is unlikely, however, that the outcome would change in any material way. Further fines may nevertheless be coming, as Grindr now relies on a new consent system and an alleged "legitimate interest" to use data without user consent. This conflicts with the decision of the Norwegian DPA, which explicitly held that "any extensive disclosure … for marketing purposes should be based on the data subject's consent".
“The case is clear from the factual and legal side. We do not expect any successful objection by Grindr. However, more ﬁnes may be in the pipeline for Grindr as it lately claims an unlawful ‘legitimate interest’ to share user data with third parties - even without consent. Grindr may be bound for a second round.”
* The project was led by the Norwegian Consumer Council
* The technical tests were carried out by the security company mnemonic.
* The research on the adtech industry and speciﬁc data brokers was performed with assistance from the researcher Wolﬁe Christl of Cracked Labs.
* Additional auditing of the Grindr app was performed by the researcher Zach Edwards of MetaX.
* The legal analysis and formal complaints were written with assistance from noyb.
Over the past several weeks, GameStop stock has traded more like a cryptocurrency than a failing mall-based retailer.
What is going on here?
In one sentence: institutional investors short GameStop (the prevailing wisdom, at least until the past few weeks) are playing a game of chicken with retail investors and contrarian institutions who are long.
GameStop is a video game retailer; it has been in decline for several years now. Video games have moved to an online, direct-to-consumer distribution model. Foot trafﬁc in malls (where most GameStops are located) was down even before COVID; many mall-based retailers are struggling.
Unsurprisingly, over the course of 2020 this led to GameStop becoming one of the most shorted stocks on Wall Street.
However, there were early signs that GameStop was undervalued. Michael Burry (of The Big Short fame) took a large long position in 2019, claiming video game discs are not entirely dead. In August 2020, Roaring Kitty (a.k.a. u/DeepFuckingValue on Reddit) published a video detailing why GameStop was a good play based on its fundamentals — a future short squeeze would just be the icing on the cake.
On January 11th, Ryan Cohen (founder of Chewy, which sold to PetSmart for $3.35 billion) joined GameStop's board after his investment firm built up a 10% stake in the company. At this point, retail investors, especially those on the popular subreddit Wall Street Bets, went crazy. They highlighted that GameStop was now a growth play: it is led by a previously successful founder, its online business is growing at a 300% rate, and it is in the process of turning around its core business. As such, GameStop should be valued at a venture-capital multiple of 10x+ revenue, rather than a measly 0.5x revenue.
This narrative is compelling. Despite short sellers warning otherwise, GameStop has continued to climb in price. All of the GameStop options issued (with a high strike price of $60) were in the money on Friday (1/22/2021), triggering a gamma squeeze as institutions who had written the options rushed to cover their positions. GameStop closed Friday at $65.01.
On Monday (1/25/2021), GameStop opened at $96.73, spiked to $159.18 (likely because of another gamma squeeze), then crashed under pressure from institutional shorts, closing at $76.79 (still up 18% day-over-day).
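The mechanics behind a gamma squeeze can be sketched with the Black-Scholes delta of a call: as the stock climbs past the strike, delta approaches 1, and a writer who delta-hedges must buy more shares, adding further buying pressure. A minimal sketch, where the strike, volatility, and position sizes are illustrative assumptions, not actual GME option data:

```python
# Sketch of delta hedging by an option writer. norm_cdf and the
# Black-Scholes delta formula are standard; all numbers are illustrative.
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot, strike, vol, t, r=0.0):
    """Black-Scholes delta of a European call option."""
    d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    return norm_cdf(d1)

strike, vol, t = 60.0, 1.5, 5 / 252  # short-dated, high implied volatility
for spot in (40.0, 60.0, 100.0):
    d = call_delta(spot, strike, vol, t)
    # shares a writer of 1,000 contracts (100 shares each) must hold to hedge
    print(f"spot ${spot:6.2f}: delta {d:.2f}, hedge {int(d * 100_000):,} shares")
```

Run with these numbers, the hedge grows from a few thousand shares out of the money to nearly the full 100,000 shares deep in the money, which is the forced buying the squeeze feeds on.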
But the bulls aren't finished with GameStop. These gamma squeezes are nothing compared to what will be coming: the near-mythical "Infinity Squeeze", most famously seen with Volkswagen in 2008. When short sellers are forced to cover their positions by a margin call and the number of shares shorted exceeds the number of shares available to buy, the price of the stock rises rapidly (hypothetically to infinity).
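To see why short interest above 100% of the float matters, consider a toy model in which each tranche of forced buying pushes the price up in proportion to demand relative to the float. Both the numbers and the price-impact rule are illustrative assumptions, not a model of GME:

```python
# Toy infinity-squeeze arithmetic: when more shares must be bought back
# than exist in the float, every round of covering raises the price paid
# by the remaining shorts. The linear impact rule is a made-up simplification.
def cover_shorts(price, shares_to_cover, float_shares, impact=0.5):
    """Cover in tranches; each tranche moves price by impact * demand / float."""
    while shares_to_cover > 0:
        tranche = min(shares_to_cover, float_shares // 10)
        price *= 1 + impact * tranche / float_shares
        shares_to_cover -= tranche
    return price

# 140% short interest (70M shorted vs. a 50M float) vs. a modest 40%
squeezed = cover_shorts(20.0, 70_000_000, 50_000_000)
mild = cover_shorts(20.0, 20_000_000, 50_000_000)
print(f"140% short interest: ${squeezed:.2f}, 40%: ${mild:.2f}")
```

Even in this crude sketch the compounding is visible: the more covering demand exceeds available supply, the further each successive tranche moves the price.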
Can a subreddit composed of retail investors really move the market like this? I doubt it — all of the big swings in this stock have been caused by institutions. What this subreddit does is control the narrative.
First unveiled to the mainstream ﬁnance world in a February 2020 Bloomberg article, Wall Street Bets is profane (as I’m sure you’ve noticed if you clicked any of the links in this post). But Wall Street Bets isn’t some sinister, market-manipulating entity. Rather, it is a virtual water cooler for individual retail investors to post memes — and emojis, oh so many emojis — about their investments.
Reading Wall Street Bets feels like the discussion at a middle school cafeteria table circa 2000. Redditors on Wall Street Bets (who refer to themselves affectionately as “autists” or “retards”)  encourage one another to have “diamond hands” (💎 🤲), the will to stay strong and not sell a stock when things are going poorly. Contrast this with the “paper hands” (🧻 🤲) of those who are weak-willed and sell a stock based on market sentiment. Companies are headed “to the moon” (🚀). Bears are not mentioned without the adjective “gay” (🌈 🐻). Self-deprecating cuckold references to “my wife’s boyfriend” abound.
Despite this language (or perhaps because of it), Wall Street Bets is one of the most entertaining and informative places on the internet. People post meaningful analysis of companies that are undervalued and why they are investing. Browsing the subreddit, you get a crash course on concepts that you would otherwise learn only at a buy-side ﬁrm or working as an options trader: EBITDA multiple, book value, delta hedging, implied volatility.
But the most compelling aspect of Wall Street Bets is in its name: the bets. The ability to gain (or lose) a life-changing amount of money — with screenshots to prove it — creates an environment similar to that of the casino ﬂoor. And if Wall Street Bets is the casino ﬂoor, then Wall Street itself is the house.
The same emotion that caused us to root for the thieves in Ocean’s 11 is what makes Wall Street Bets so enticing. Put frankly, Millennials are tired of getting fucked by the man. When you’re underemployed with $100,000 in student loan debt, your ﬁnancial situation feels overwhelming. You really don’t want to take the advice of your parents or CNBC talking heads  to invest 10% of your salary for a 4% annual return. At that point, what’s another $5,000? Might as well buy some short-dated GME calls.
For those of us who don't fit the underemployed Millennial archetype, Matt Levine's Boredom Markets Hypothesis applies. COVID has required us to work from home, without much ability to spend on travel, dining, or entertainment. Putting money into Robinhood is a decent substitute, with the added bonus of being an "investment" rather than consumption. In an age where the Fed will print seemingly unlimited money to prop up capital markets, better to be irrationally exuberant as a part of the market than be left out of the party.
Further, the narrative presented by the GameStop trade in particular is compelling. It allows the small retail investor to play a role in market events normally only played out at the hedge fund scale (a short squeeze was a key plot element in Season 1 of Billions). The short sellers in this case aren’t particularly sympathetic: Andrew Left of Citron Research released a video in which he lays out the bear case for GameStop. His main argument was a smug appeal to authority, essentially claiming “Wall Street knows better than you people on message boards”. 
So sure, Wall Street Bets is irreverent, has irrational exuberance, and is guilty of hero worship (Elon Musk and more recently Ryan Cohen). But it also provides a sense of community during the stresses of COVID and provides a compelling way for the little guy to stick it to the man.
As Keynes reminded us (in the most overused finance quote of all time): "The markets can remain irrational longer than you can remain solvent." When enough people believe in a vision, it can cause that vision to manifest itself. Wall Street is scared that retail investors can manifest their own vision, rather than the one dictated by the major financial players.
The entire GameStop scenario is a case study in reﬂexivity.  Reﬂexivity is the idea that our perception of circumstances inﬂuences reality, which then further impacts our perception of reality, in a self-reinforcing loop. Speciﬁcally, in a ﬁnancial market, prices are a reﬂection of traders’ expectations. Those prices then inﬂuence traders’ expectations, and so on.
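The loop can be caricatured in a few lines of code: an exogenous price bump feeds traders' expectations, which feed back into the price. The feedback parameter here is a made-up illustration of the concept, not a market model; above 1 the loop is self-amplifying, below 1 it damps out:

```python
# Toy reflexivity loop: each period, buying pressure is proportional
# to the previous price move, so perception of the price drives the price.
def reflexive_path(p0, feedback, steps):
    prices = [p0, p0 * 1.05]  # an initial exogenous bump in the price
    for _ in range(steps):
        change = prices[-1] - prices[-2]               # traders observe the move...
        prices.append(prices[-1] + feedback * change)  # ...and trade in its direction
    return prices

print(reflexive_path(100.0, 1.5, 10)[-1])  # self-reinforcing: moves keep growing
print(reflexive_path(100.0, 0.5, 10)[-1])  # self-damping: moves die out
```

The interesting regime is the first one, where the feedback itself, not any change in fundamentals, generates the trend.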
This may seem obvious to some, but it ﬂies in the face of the efﬁcient-market hypothesis. As Soros states,
What makes reﬂexivity interesting is that the prevailing bias has ways, via the market prices, to affect the so-called fundamentals that market prices are supposed to reﬂect. 
What does this mean for GameStop? Because of traders' bullish sentiment, a previously failing company is now in a position where it can leverage its overnight increase in value to make real, substantive changes to its business. GameStop can pay off debt through the issuance of new shares or make strategic acquisitions using its newly valuable shares. A struggling company could become solid not because of a change in the underlying business, but because investors decided it should be more valuable.
Reﬂexivity may be the best way to understand the 21st Century. Passive investing is an example of reﬂexivity in action.  So is winner-take-all venture investing. Uber raised an absurd war chest, causing more investors to want to pile in, which led to more fundraising and eventually a successful IPO. The fact that Uber has not yet turned a proﬁt, yet today has a $100 billion market cap, cannot be explained with traditional ﬁnancial thinking, but can be explained by reﬂexivity.
The internet and instant communication only accelerate these trends. Instances of reflexivity like the strange market movements we've seen with GameStop are happening more and more — not only in financial markets, but also in the political and social realm, to incredible effect.
When Donald Trump won the presidency in 2016, I distinctly remember writing in my journal: "Anything is possible." I was blown away that this complete buffoon of a man, someone whom the Huffington Post refused to cover as politics, had memed his way into the presidency. He was a joke, until suddenly, in a Tulpa-esque twist…he wasn't. Similarly, internet conspiracy theories spread via Facebook memes manifested themselves in the real world when Trump supporters stormed the Capitol a few weeks ago.
Our perception shapes reality. And when enough people agree on a specific perception, it becomes reality. As we become more and more connected, discourse will expand and accelerate. We're going to see some strange things become reality.
Even, perhaps, hedge funds going bankrupt and newly-minted millionaires, all because of some people who wrote about a struggling video game retailer on Reddit.
Retail investors basically just shut a hedge fund down.
Citadel and Point72 are investing (backstopping) $2.75 billion into Melvin Capital who was superman short $GME GameStop
Melvin down over 30% in 2021
Melvin cap is run by Gabe Plotkin a Steve Cohen SAC protege— Will Meade (@realwillmeade) January 25, 2021
 For the best summary of the current situation, see Matt Levine.
 When you take into account the closures of poorly-performing stores, per-store revenue and profits are up.
 Options were written up to a strike price of $115 and these all were in the money.
 Not condoning the language, but Wall Street Bets members with trading gains often make donations to these causes.
 Wall Street Bets has a love/hate (mostly hate) relationship with Jim Cramer, a.k.a. “Chillman Boomer”.
 Andrew Left is an interesting character. That said, I’m not here to attack him personally, and nobody in their right mind would condone the alleged threats made against him by GameStop bulls upset by his stance on the company.
 Good intro to Soros’s Theory of Reﬂexivity in this Financial Times article.
 More details in this Reddit post.
 Passive investing is also helping GameStop’s run — as the price of the stock increases, index funds need to buy more shares to re-weight, which in turn drives up the price. Reﬂexivity.
This is my favorite rebuttal for those who claim "cryptocurrency has no intrinsic value". Sure — but neither does the U.S. dollar. We just all decided that it would have value, so it does.
On 8 January 2021 at 14:05 CET, the synchronous area of Continental Europe was separated into two parts due to outages of several transmission network elements in a very short time. ENTSO-E published initial information on the event on 8 January 2021, followed by an update with a geographical view and time sequence on 15 January 2021. Since then, ENTSO-E has analysed a large portion of the relevant data, aiming to reconstruct the event in detail.
This second update presents the key findings of those detailed analyses. The findings are preliminary and subject to revision as new facts emerge from the still ongoing investigation.
The analysed sequence of events shows that the initial event was the tripping of a 400 kV busbar coupler in the Ernestinovo substation (Croatia) by overcurrent protection at 14:04:25.9. This decoupled the two busbars in the Ernestinovo substation, which in turn separated the north-west and south-east electric power flows through this substation. As shown in Figure 1 below, the north-west-bound lines, which remained connected to one busbar, connect Ernestinovo to Zerjavinec (Croatia) and Pecs (Hungary), while the south-east-bound lines, which remained connected to the other busbar, connect Ernestinovo to Ugljevik (Bosnia-Herzegovina) and Sremska Mitrovica (Serbia).
Figure 1 - Decoupling of two busbars in Ernestinovo
The separation of flows in the Ernestinovo substation led to a shift of electric power flows onto neighbouring lines, which were subsequently overloaded. At 14:04:48.9, the line Subotica — Novi Sad (Serbia) tripped due to overcurrent protection. This was followed by the further tripping of lines due to distance protection, as shown in Figure 2 below, eventually leading to the separation of the system into two parts at 14:05:08.6.
Figure 2 - Tripping of additional transmission network elements after the decoupling of two busbars in Ernestinovo
The route where the two parts of the Continental Europe Synchronous Area were separated is shown in Figure 3 below:
The system separation resulted in a deﬁcit of power (approx. -6.3 GW) in the North-West Area and a surplus of power (approx. +6.3 GW) in the South-East Area, resulting in turn in a frequency decrease in the North-West Area and a frequency increase in the South-East Area.
At approximately 14:05 CET, the frequency in the North-West Area initially decreased to a value of 49.74 Hz within a period of around 15 seconds before quickly reaching a steady state value of approximately 49.84 Hz. At the same time, the frequency in the South-East Area initially increased up to 50.6 Hz before settling at a steady state frequency between 50.2 Hz and 50.3 Hz as illustrated in Figure 4 below:
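These figures can be cross-checked against the usual linear approximation for an interconnected system, Δf ≈ ΔP / λ, where λ is the network power-frequency characteristic. A back-of-the-envelope sketch; the λ value is implied from the figures reported above, not an ENTSO-E number:

```python
# Linear frequency-response approximation: delta_f = delta_P / lambda.
# Using the reported ~6.3 GW deficit and the ~49.84 Hz steady state in the
# North-West Area to back out an illustrative lambda for that area.
def steady_state_deviation(power_imbalance_gw, lambda_gw_per_hz):
    """Steady-state frequency deviation (Hz) for a given imbalance."""
    return power_imbalance_gw / lambda_gw_per_hz

deficit_gw = 6.3
deviation_hz = 50.0 - 49.84  # ~0.16 Hz steady-state deviation
implied_lambda = deficit_gw / deviation_hz
print(f"implied lambda ~ {implied_lambda:.0f} GW/Hz")
print(f"deviation at that lambda: {steady_state_deviation(deficit_gw, implied_lambda):.2f} Hz")
```

The deeper initial dip to 49.74 Hz reflects the transient before primary frequency reserves fully deployed, which the simple steady-state formula does not capture.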
Figure 4 - Frequency in Continental Europe during the event on 8 January 2021 right after the disturbance and during resynchronisation
Due to the low frequency in the North-West Area, contracted interruptible services in France and Italy (around 1.7 GW in total) were disconnected in order to reduce the frequency deviation. These services are provided by large customers who are contracted by the respective Transmission System Operators (TSOs) to be disconnected if the frequency drops below a certain threshold. In addition, 420 MW and 60 MW of supportive power were automatically activated from the Nordic and Great Britain synchronous areas respectively. These countermeasures ensured that by 14:09 CET the frequency deviation from the nominal value of 50 Hz had been reduced to around 0.1 Hz in the North-West Area (Figure 4).
In order to reduce the high frequency in the South-East Area, automatic and manual countermeasures were activated, including reductions in generation output (e.g. the automatic disconnection of a 975 MW generator in Turkey at 14:04:57). As a consequence, the frequency in the South-East Area returned to 50.2 Hz at 14:29 CET and remained within control limits (49.8 Hz and 50.2 Hz) until the resynchronisation of the two separated areas took place at 15:07:31.6 CET.
Between 14:30 CET and 15:06 CET, the frequency in the South-East Area fluctuated between 49.9 Hz and 50.2 Hz due to the rather small size of the South-East Area, in which several production units had also been disconnected (Figure 5). During this period, the frequency in the North-West Area fluctuated far less and remained close to the nominal value, owing to the rather large size of the North-West Area. This frequency behaviour is the subject of further detailed investigation.
Figure 5 - Frequency in Continental Europe during the event on 8 January 2021 for the complete duration
The automatic response and the coordinated actions taken by the TSOs in Continental Europe ensured that the situation was quickly restored close to normal operation. The contracted interruptible services in Italy and in France were reconnected at 14:47 CET and 14:48 CET respectively prior to the resynchronisation of the North-West and South-East areas at 15:08 CET.
ENTSO-E continues to keep the European Commission and the Electricity Coordination Group, composed of representatives of Member States, informed and updated with detailed results of the preliminary technical analyses.
Based on the preliminary technical analyses presented above, a formal investigation following the legal framework under the Commission Regulation (EU) 2017/1485 of 2 August 2017 (System Operation Guideline) will be established, whereby National Regulatory Authorities and ACER are invited to join with TSOs in an Expert Investigation Panel.
In line with the provisions of the mentioned Commission Regulation (EU) 2017/1485 of 2 August 2017, ENTSO-E will present the results of the investigation to the Electricity Coordination Group and will subsequently publish a report once the analysis is completed.
Note: All ﬁgures and details about the sequence of the events are still subject to ﬁnal investigation and possible changes.
The transmission grids of the countries of Continental Europe are electrically tied together and operate synchronously at a frequency of approximately 50 Hz. An event on 8 January 2021 caused the Continental Europe synchronous area to separate into two areas, with an area in the South-East of Europe temporarily operating in separation from the rest of Continental Europe.
Is this the ﬁrst time such an event happens in Continental Europe?
The Continental Europe synchronous area is one of the largest interconnected synchronous electricity systems in the world in terms of its size and the number of customers it supplies. Such an event can happen in any electric power system; system resilience and the preparedness of the system operators in charge have a decisive impact on its consequences. A separation of the synchronous area with a much larger disturbance and impact on customers took place in Continental Europe on 4 November 2006. That event was extensively analysed and led to a number of substantial developments, such as the European Awareness System (EAS), a platform that allows TSOs to exchange operational information in real time, enabling them to react immediately in case of unusual system conditions. The TSOs are therefore well prepared to coordinate and manage such events and to limit their consequences. This preparedness, together with permanent observation of the system frequency, made it possible to resynchronise the two separated areas within a very short period of time.
How are countermeasures coordinated in Continental Europe in case of frequency deviations?
In Continental Europe, procedures are in place to avoid system disturbances and especially large frequency deviations, which carry the risk of uncoordinated disconnection of customers or generation. The TSOs Amprion (Germany) and Swissgrid (Switzerland) are responsible for these procedures in their role as synchronous area monitor (SAM) in Continental Europe. The SAM continuously monitors the system frequency. In case of large frequency deviations, it informs all TSOs via the European Awareness System (EAS) and launches an extraordinary procedure for frequency deviations to coordinate countermeasures in a fast and effective manner in order to stabilize the system. One step of this procedure is a telephone conference between Amprion, Swissgrid, RTE (France), Terna (Italy) and REE (Spain). This teleconference took place at 14:09 CET on 8 January 2021. In the telephone conference, the situation was evaluated and the TSOs reported on the countermeasures that had already been activated. The TSOs of the North-West and South-East Areas also coordinated the actions for reconnection in order to restore a single synchronous area in Continental Europe.
Were end customers disconnected? Were there any other consequences?
Customers in the order of 70 MW in the North-West Area and in the order of 163 MW in the South-East Area were disconnected. Due to the high resilience of the interconnected network and the rapid response of European TSOs, the security of operation and of electricity supply was not further endangered. An important contribution to stabilizing the system was made by the previously contracted interruptible services activated in France and Italy. These contracts, agreed with customers in advance, allow the TSO to temporarily and automatically reduce electrical consumption depending on the real-time situation of the electric power system.
What is an electrical busbar?
A busbar is an electrical junction in a substation, which connects overhead lines, cables and transformers through electrical switches. Usually, there are several busbars in a substation, which can be connected by a busbar coupler.
Are there special devices protecting the equipment in a substation?
Various devices protect the equipment in a substation. One of them is the overcurrent protection relay, which automatically disconnects the equipment (e.g. an overhead line or cable) if the electrical current becomes high enough to damage it. A current higher than what the equipment's material (e.g. aluminium wrapped around a steel carrier rope) is rated for will cause mechanical damage and can also endanger people and other assets, if for instance a damaged overhead line drops to the ground without being disconnected. Another type of protection is the distance protection relay, which measures a combination of current and voltage over time and acts selectively to protect equipment depending on the distance of the fault from the equipment.
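In rough terms, the two relay types described above act on different measurements: one on current magnitude alone, the other on apparent impedance (voltage divided by current), a proxy for how far away the fault is. A highly simplified sketch with illustrative thresholds; real relays use time-graded curves and multiple impedance zones:

```python
# Simplified relay decision logic. The pickup factor, ratings and zone
# impedance below are illustrative values, not real protection settings.
def overcurrent_trip(current_a, rated_a, pickup_factor=1.2):
    """Trip when current exceeds the equipment rating by a safety margin."""
    return current_a > pickup_factor * rated_a

def distance_trip(voltage_v, current_a, zone_impedance_ohm):
    """Trip when the apparent impedance falls inside the protected zone."""
    apparent_z = voltage_v / current_a
    return apparent_z < zone_impedance_ohm

# a heavily overloaded line trips on overcurrent...
print(overcurrent_trip(2600, rated_a=2000))  # True: 2600 A > 1.2 * 2000 A
# ...while a close-in fault (low apparent impedance) trips on distance
print(distance_trip(230_000, current_a=12_000, zone_impedance_ohm=40))  # True
```

This mirrors the sequence described in the analysis: the busbar coupler and the Subotica — Novi Sad line tripped on overcurrent, and the subsequent cascade tripped on distance protection.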
What next steps are foreseen for the investigation?
According to Article 15 of the Commission Regulation (EU) 2017/1485, for a Scale 2 event such as the one on 8 January 2021, an Expert Investigation Panel shall be set up, composed of TSO experts, to which National Regulatory Authorities and ACER are also invited. The Expert Investigation Panel will produce a report describing in detail the sequence of events, the root causes and, if applicable, the actions necessary to help prevent similar events in the future. The next steps, timeline and final publishing dates, as well as all other relevant information, will be published on the ENTSO-E website.