10 interesting stories served every morning and every evening.

1 1,147 shares, 107 trendiness

Firefox 85 Cracks Down on Supercookies – Mozilla Security Blog

Trackers and adtech com­pa­nies have long abused browser fea­tures to fol­low peo­ple around the web. Since 2018, we have been ded­i­cated to re­duc­ing the num­ber of ways our users can be tracked. As a first line of de­fense, we’ve blocked cook­ies from known track­ers and scripts from known fin­ger­print­ing com­pa­nies.

In Firefox 85, we’re in­tro­duc­ing a fun­da­men­tal change in the browser’s net­work ar­chi­tec­ture to make all of our users safer: we now par­ti­tion net­work con­nec­tions and caches by the web­site be­ing vis­ited. Trackers can abuse caches to cre­ate su­per­cook­ies and can use con­nec­tion iden­ti­fiers to track users. But by iso­lat­ing caches and net­work con­nec­tions to the web­site they were cre­ated on, we make them use­less for cross-site track­ing.

In short, supercookies can be used in place of ordinary cookies to store user identifiers, but they are much more difficult to delete and block. This makes it nearly impossible for users to protect their privacy as they browse the web. Over the years, trackers have been found storing user identifiers as supercookies in increasingly obscure parts of the browser, including in Flash storage, ETags, and HSTS flags.

The changes we’re mak­ing in Firefox 85 greatly re­duce the ef­fec­tive­ness of cache-based su­per­cook­ies by elim­i­nat­ing a track­er’s abil­ity to use them across web­sites.

Like all web browsers, Firefox shares some in­ter­nal re­sources be­tween web­sites to re­duce over­head. Firefox’s im­age cache is a good ex­am­ple: if the same im­age is em­bed­ded on mul­ti­ple web­sites, Firefox will load the im­age from the net­work dur­ing a visit to the first web­site and on sub­se­quent web­sites would tra­di­tion­ally load the im­age from the browser’s lo­cal im­age cache (rather than re­load­ing from the net­work). Similarly, Firefox would reuse a sin­gle net­work con­nec­tion when load­ing re­sources from the same party em­bed­ded on mul­ti­ple web­sites. These tech­niques are in­tended to save a user band­width and time.

Unfortunately, some trackers have found ways to abuse these shared resources to follow users around the web. In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image. To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites.
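The mechanics can be sketched in a few lines. Below is a toy model, not Firefox’s actual implementation, of a cache keyed only by resource URL versus one partitioned by the top-level site, showing why a cached identifier becomes unreachable across sites:

```python
# Toy model of cache partitioning (illustrative only, not Firefox internals).
# An unpartitioned cache is keyed by resource URL alone; a partitioned cache
# adds the top-level site being visited to the key.

class UnpartitionedCache:
    def __init__(self):
        self.store = {}

    def get(self, top_level_site, url):
        return self.store.get(url)  # top-level site is ignored

    def put(self, top_level_site, url, value):
        self.store[url] = value

class PartitionedCache:
    def __init__(self):
        self.store = {}

    def get(self, top_level_site, url):
        return self.store.get((top_level_site, url))

    def put(self, top_level_site, url, value):
        self.store[(top_level_site, url)] = value

tracker_img = "https://tracker.example/pixel.png"

# Tracker "encodes" an identifier while the user visits site-a.com ...
shared = UnpartitionedCache()
shared.put("site-a.com", tracker_img, "user-1234")
# ... and "retrieves" it from site-b.com: cross-site tracking works.
assert shared.get("site-b.com", tracker_img) == "user-1234"

# With partitioning, site-b.com gets a cache miss: the identifier is useless there,
# while same-site revisits on site-a.com still hit the cache.
partitioned = PartitionedCache()
partitioned.put("site-a.com", tracker_img, "user-1234")
assert partitioned.get("site-b.com", tracker_img) is None
assert partitioned.get("site-a.com", tracker_img) == "user-1234"
```

The same keying idea extends to every cache listed below; only the key tuple changes.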

In fact, there are many dif­fer­ent caches track­ers can abuse to build su­per­cook­ies. Firefox 85 par­ti­tions all of the fol­low­ing caches by the top-level site be­ing vis­ited: HTTP cache, im­age cache, fav­i­con cache, HSTS cache, OCSP cache, style sheet cache, font cache, DNS cache, HTTP Authentication cache, Alt-Svc cache, and TLS cer­tifi­cate cache.

To fur­ther pro­tect users from con­nec­tion-based track­ing, Firefox 85 also par­ti­tions pooled con­nec­tions, prefetch con­nec­tions, pre­con­nect con­nec­tions, spec­u­la­tive con­nec­tions, and TLS ses­sion iden­ti­fiers.

This par­ti­tion­ing ap­plies to all third-party re­sources em­bed­ded on a web­site, re­gard­less of whether Firefox con­sid­ers that re­source to have loaded from a track­ing do­main. Our met­rics show a very mod­est im­pact on page load time: be­tween a 0.09% and 0.75% in­crease at the 80th per­centile and be­low, and a max­i­mum in­crease of 1.32% at the 85th per­centile. These im­pacts are sim­i­lar to those re­ported by the Chrome team for sim­i­lar cache pro­tec­tions they are plan­ning to roll out.

Systematic net­work par­ti­tion­ing makes it harder for track­ers to cir­cum­vent Firefox’s anti-track­ing fea­tures, but we still have more work to do to con­tinue to strengthen our pro­tec­tions. Stay tuned for more pri­vacy pro­tec­tions in the com­ing months!

Re-architecting how Firefox han­dles net­work con­nec­tions and caches was no small task, and would not have been pos­si­ble with­out the tire­less work of our en­gi­neer­ing team: Andrea Marchesini, Tim Huang, Gary Chen, Johann Hofmann, Tanvi Vyas, Anne van Kesteren, Ethan Tseng, Prangya Basu, Wennie Leung, Ehsan Akhgari, and Dimi Lee.

We wish to ex­press our grat­i­tude to the many Mozillians who con­tributed to and sup­ported this work, in­clud­ing: Selena Deckelmann, Mikal Lewis, Tom Ritter, Eric Rescorla, Olli Pettay, Kim Moir, Gregory Mierzwinski, Doug Thayer, and Vicky Chin.

We also want to ac­knowl­edge past and on­go­ing ef­forts car­ried out by col­leagues in the Brave, Chrome, Safari and Tor Browser teams to com­bat su­per­cook­ies in their own browsers.


Read the original on blog.mozilla.org »

2 473 shares, 30 trendiness

Firefox 85.0, See All New Features, Updates and Fixes

We’d like to ex­tend a spe­cial thank you to all of the new Mozillians who con­tributed to this re­lease of Firefox.

At Mozilla, we be­lieve you have a right to pri­vacy. You should­n’t be tracked on­line. Whether you are check­ing your bank bal­ance, look­ing for the best doc­tor, or shop­ping for shoes, un­scrupu­lous track­ing com­pa­nies should not be able to track you as you browse the Web. For that rea­son, we are con­tin­u­ously work­ing to harden Firefox against on­line track­ing of our users.


Read the original on www.mozilla.org »

3 455 shares, 8 trendiness

Halt and Catch Fire Syllabus

This site fea­tures a cur­ricu­lum de­vel­oped around the tele­vi­sion se­ries, Halt and Catch Fire (2014-2017), a fic­tional nar­ra­tive about peo­ple work­ing in tech dur­ing the 1980s-1990s.

The intent is for this website to be used by self-forming small groups that want to create a “watching club” (like a book club) and discuss aspects of technology history that are featured in this series.

There are 15 classes, for a “semester-long” course:

~ #01 ~ #02 ~ #03 ~ #04 ~ #05 ~ #06 ~ #07 ~ #08 ~ #09 ~ #10 ~ #11 ~ #12 ~ #13 ~ #14 ~ #15 ~

* Apéritifs Casual view­ing pre­sented be­fore gath­er­ing. This is en­ter­tain­ment; not re­quired view­ing.

* RFC as koan A Request for Comments from the Internet Engineering Task Force, for re­flect­ing on.

* Emulation as koan An em­u­lated com­puter in the browser, also for re­flec­tion.

* Themes Recommendations for top­ics to be dis­cussed.

* Readings Related ma­te­r­ial for deeper think­ing on the class topic.

* Description Brief sum­mary of what’s go­ing on in the episodes and how it re­lates to tech his­tory at large / the weekly topic.

* Episode sum­maries A link to sum­maries of the episodes that should be watched prior to meet­ing as a group. Watching each episode is not re­quired; if time does­n’t al­low, re­fer to the sum­maries. Content warn­ings are pro­vided for rel­e­vant episodes. If there are spe­cific con­cerns, this can de­ter­mine which episodes should be skipped or an­tic­i­pated be­fore view­ing.

Curriculum and web­site de­signed by Ashley Blewer.

see also ↠ source code & site meta­data


Read the original on bits.ashleyblewer.com »

4 341 shares, 19 trendiness

Postgres scaling advice for 2021 - CYBERTEC


So, you’re building the next unicorn startup and are thinking feverishly about a future-proof PostgreSQL architecture to house your bytes? My advice here, having seen dozens of hopelessly over-engineered / oversized solutions as a database consultant over the last 5 years, is short and blunt: Don’t overthink, and keep it simple on the database side! Instead of getting fancy with the database, focus on your application. Turn your microscope to the database only when the need actually arises, m’kay! When that day comes, first of all, try all the common vertical scale-up approaches and tricks. Try to avoid using derivative Postgres products, or employing distributed approaches, or home-brewed sharding at all costs — until you have, say, less than 1 year of breathing room available.

Wow, what kind of advice is that for 2021? I’m talking about a simple, single-node approach in the age of Big Data and hyper-scalability… I surely must be a Luddite or just still dizzy from too much New Year’s Eve champagne. Well, perhaps so, but let’s start from a bit further back…

PostgreSQL and MySQL — brothers from another mother

Over the holidays, I finally had a bit of time to catch up on my tech reading / watching TODO-list (still dozens of items left though, arghh)… and one pretty good talk was on the past and current state of distributed MySQL architectures by Peter Zaitsev of Percona. Oh, MySQL??? No no, we haven’t “changed horses” suddenly, PostgreSQL is still our main focus 🙂 It’s just that in many key points pertaining to scaling, the same constraints actually also apply to PostgreSQL. After all, they’re both designed as single-node relational database management engines.

In short, I’m summarizing some ideas out of the talk, plus adding some of my own. I would like to provide some food for thought to those who are overly worried about database performance — thus prematurely latching onto some overly complex architectures. In doing so, the “worriers” sacrifice some other good properties of single-node databases — like usability, and being bullet-proof.

All distributed systems are inherently complex, and difficult to get right

If you’re new to this realm, just trust me on the above, OK? There’s a bunch of abandoned or stale projects which have tried to offer some fully or semi-automatically scalable, highly available, easy-to-use and easy-to-manage DBMS… and failed! It’s not an utterly bad thing to try though, since we can learn from it. Actually, some products are getting pretty close to the Holy Grail of distributed SQL databases (CockroachDB comes to mind first). However, I’m afraid we still have to live with the CAP theorem. Also, remember that to go from covering 99.9% of corner cases of complex architectures to covering 99.99% is not a matter of linear complexity/cost, but rather exponential complexity/cost!

Although after a certain amount of time a company like Facebook surely needs some kind of horizontal scaling, maybe you’re not there yet, and maybe stock Postgres can still provide you some years of stress-free cohabitation. Consider: Do you even have a runway for that long?

* A single PostgreSQL instance can easily do hundreds of thousands of transactions per second

For example, on my (pretty average) workstation, I can do ca. 25k simple read transactions per 1 CPU core on an “in memory” pgbench dataset… with the default config for Postgres v13! With some tuning (by the way, tuning reads is much harder in Postgres than tuning writes!) I was able to increase it to ~32k TPS per core, meaning: a top-notch, dedicated hardware server can do about 1 million short reads! With reads, you can also usually employ replicas — so multiply that by 10 if needed! You then need to somehow solve the query routing problem, but there are tools for that. In some cases, the new standard LibPQ connection string syntax (target_session_attrs) can be used — with some shuffling. By the way, Postgres doesn’t limit the number of replicas, though I personally have never witnessed more than 10 replicas. With some cascading, I’m sure you could run dozens without bigger issues.

* A single node can typically do tens of thousands of write transactions per second

On my humble workstation with 6 cores (12 logical CPUs) and NVMe SSD storage, the default very write-heavy (3 UPD, 1 INS, 1 SEL) “pgbench” test greets me with a number of around 45k TPS — for example, after some checkpoint tuning — and there are even more tuning tricks available.

* A single Postgres instance can easily handle dozens of terabytes of data

Given that you have separated “hot” and “cold” data sets, and there’s some thought put into indexing, etc., a single Postgres instance can cope with quite a lot of data. Backups and standby server provisioning, etc. will be a pain, since you’ll surely meet some physical limits even on the finest hardware. However, these issues are common to all database systems. From the query performance side, there is no reason why it should suddenly be forced to slow down!

* A single node instance is literally bullet-proof as far as data consistency is concerned

Given that 1) you declare your constraints correctly, 2) don’t fool around with “fsync” or asynchronous commit settings, and 3) your disks don’t explode, a single node instance provides rock-solid data consistency. Again, the last item applies to any data storage, so hopefully, you have “some” backups somewhere…

* Failures are easily comprehensible — thus also recoverable

Meaning: even if something very bad happens and the primary node is down, the worst outcome is that your application is just currently unavailable. Once you do your recovery magic (or better, let some bot like Patroni take care of that) you’re exactly where you were previously. Now compare that with some partial failure scenarios or data hashing errors in a distributed world! Believe me, when working with critical data, in a lot of cases it’s better to have a short downtime than to have to sort out some runaway datasets for days or weeks to come, which is confusing for yourself and your customers.

Tips to be prepared for scaling

In the beginning of the post, I said that when starting out, you shouldn’t worry too much about scaling from the architectural side. That doesn’t mean you should ignore some common best practices, in case scaling could theoretically be required later. Some of them might be:

* Don’t be afraid to run your own database

This might be the most important thing on the list — with modern real hardware (or some metal cloud instances) and the full power of config and filesystem tuning and extensions, you’ll typically do just fine on a single node for years. Remember that if you get tired of running your own setup, nowadays you can always migrate to some cloud provider — with minimal downtime — via Logical Replication! If you want to know how, see here. Note that I specifically mentioned “real” hardware above, due to the common misconception that a single cloud vCPU is pretty much equal to a real one… the reality is far from that of course — my own impression over the years has been that there is around a 2-3x performance difference, depending on the provider/region/luck factor in question.

* Try to avoid the serious mistake of having your data “architecture” centered around a single huge table

You’d be surprised how often we see that… so slice and dice early, or set up some partitioning. Partitioning will also do a lot of good to the long-term health of the database, since it allows multiple autovacuum workers on the same logical table, and it can speed up IO considerably on enterprise storage. If IO indeed becomes a bottleneck at some point, you can employ Postgres native remote partitions, so that some older data lives on another node.

* Make sure to “bake in” a proper sharding key for your tables/databases

Initially, the data can just reside on a single physical node. If your data model revolves around the “millions of independent clients” concept for example, then it might even be best to start with many “sharded” databases with identical schemas, so that transferring out the shards to separate hardware nodes will be a piece of cake in the future.

There are benefits to systems that can scale 1000x from day one… but in many cases, there’s also an unreasonable (and costly) desire to be ready for scaling. I get it, it’s very human — I’m also tempted to buy a nice BMW convertible with a maximum speed of 250 kilometers per hour… only to discover that the maximum allowed speed in my country is 110, and even that during the summer months.

The thing that resonated with me from the YouTube talk the most was that there’s a definite downside to such theoretical scaling capability — it throttles development velocity and operational management efficiency at early stages! Having a plain rock-solid database that you know well, and which also actually performs well — if you know how to use it — is most often a great place to start with.

By the way, here’s another good link on a similar note from a nice GitHub collection, and also one pretty detailed overview about how an Alexa top 250 company managed to get by with a single database for 12 years before needing drastic scaling action!

To sum it all up: this is probably a good place to quote the classics: premature optimization is the root of all evil…
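The “bake in a sharding key” tip can be sketched in a few lines. This is a minimal illustration with hypothetical names (not an API from the article): clients are routed to one of many identically-schema’d databases by hashing the sharding key, so that moving a shard to new hardware later only means updating a connection map.

```python
# Minimal sketch of client-id based sharding (hypothetical names, not a real
# Postgres API). All shards share an identical schema; transferring a shard to
# separate hardware later only means updating its DSN in the map below.
import zlib

NUM_SHARDS = 8

# Initially every DSN can point at the same physical node; later they need not.
SHARD_DSNS = {i: f"host=db0 dbname=app_shard_{i}" for i in range(NUM_SHARDS)}

def shard_for(client_id: str) -> int:
    # crc32 is stable across runs and platforms, unlike Python's built-in hash().
    return zlib.crc32(client_id.encode()) % NUM_SHARDS

def dsn_for(client_id: str) -> str:
    # Every query for a client is routed through its sharding key.
    return SHARD_DSNS[shard_for(client_id)]

# The mapping is deterministic, so all rows for one client land together:
shard = shard_for("client-42")
assert 0 <= shard < NUM_SHARDS
assert dsn_for("client-42") == SHARD_DSNS[shard]
```

The design point is simply that the routing function exists from day one, even while there is only one box behind it.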


Read the original on www.cybertec-postgresql.com »

5 304 shares, 24 trendiness

Backblaze Hard Drive Stats for 2020

In 2020, Backblaze added 39,792 hard dri­ves and as of December 31, 2020 we had 165,530 dri­ves un­der man­age­ment. Of that num­ber, there were 3,000 boot dri­ves and 162,530 data dri­ves. We will dis­cuss the boot dri­ves later in this re­port, but first we’ll fo­cus on the hard drive fail­ure rates for the data drive mod­els in op­er­a­tion in our data cen­ters as of the end of December. In ad­di­tion, we’ll wel­come back Western Digital to the farm and get a look at our nascent 16TB and 18TB dri­ves. Along the way, we’ll share ob­ser­va­tions and in­sights on the data pre­sented and as al­ways, we look for­ward to you do­ing the same in the com­ments.

At the end of 2020, Backblaze was mon­i­tor­ing 162,530 hard dri­ves used to store data. For our eval­u­a­tion, we re­move from con­sid­er­a­tion 231 dri­ves which were used for test­ing pur­poses and those drive mod­els for which we did not have at least 60 dri­ves. This leaves us with 162,299 hard dri­ves in 2020, as listed be­low.

The 231 dri­ves not in­cluded in the list above were ei­ther used for test­ing or did not have at least 60 dri­ves of the same model at any time dur­ing the year. The data for all dri­ves, data dri­ves, boot dri­ves, etc., is avail­able for down­load on the Hard Drive Test Data web­page.

For dri­ves which have less than 250,000 drive days, any con­clu­sions about drive fail­ure rates are not jus­ti­fied. There is not enough data over the year-long pe­riod to reach any con­clu­sions. We pre­sent the mod­els with less than 250,000 drive days for com­plete­ness only.

For drive models with over 250,000 drive days over the course of 2020, the Seagate 6TB drive (model: ST6000DX000) leads the way with a 0.23% annualized failure rate (AFR). This model was also the oldest, in average age, of all the drives listed. The 6TB Seagate model was followed closely by the perennial contenders from HGST: the 4TB drive (model: HMS5C4040ALE640) at 0.27%, the 4TB drive (model: HMS5C4040BLE640) at 0.27%, the 8TB drive (model: HUH728080ALE600) at 0.29%, and the 12TB drive (model: HUH721212ALE600) at 0.31%.

The AFR for 2020 for all drive mod­els was 0.93%, which was less than half the AFR for 2019. We’ll dis­cuss that later in this re­port.

We had a goal at the beginning of 2020 to diversify the number of drive models we qualified for use in our data centers. To that end, we qualified nine new drive models during the year, as shown below.

Actually, there were two additional hard drive models which were new to our farm in 2020: the 16TB Seagate drive (model: ST16000NM005G) with 26 drives, and the 16TB Toshiba drive (model: MG08ACA16TA) with 40 drives. Each fell below our 60-drive threshold and was not listed.

The goal of qual­i­fy­ing ad­di­tional drive mod­els proved to be prophetic in 2020, as the ef­fects of Covid-19 be­gan to creep into the world econ­omy in March 2020. By that time we were well on our way to­wards our goal and while be­ing less of a cre­ative so­lu­tion than drive farm­ing, drive model di­ver­si­fi­ca­tion was one of the tac­tics we used to man­age our sup­ply chain through the man­u­fac­tur­ing and ship­ping de­lays preva­lent in the first sev­eral months of the pan­demic.

The last time a Western Digital (WDC) drive model was listed in our re­port was Q2 2019. There are still three 6TB WDC dri­ves in ser­vice and 261 WDC boot dri­ves, but nei­ther are listed in our re­ports, so no WDC dri­ves—un­til now. In Q4 a to­tal of 6,002 of these 14TB dri­ves (model: WUH721414ALE6L4) were in­stalled and were op­er­a­tional as of December 31st.

These dri­ves ob­vi­ously share their lin­eage with the HGST dri­ves, but they re­port their man­u­fac­turer as WDC ver­sus HGST. The model num­bers are sim­i­lar with the first three char­ac­ters chang­ing from HUH to WUH and the last three char­ac­ters chang­ing from 604, for ex­am­ple, to 6L4. We don’t know the sig­nif­i­cance of that change, per­haps it is the fac­tory lo­ca­tion, a firmware ver­sion, or some other des­ig­na­tion. If you know, let every­one know in the com­ments. As with all of the ma­jor drive man­u­fac­tur­ers, the model num­ber car­ries pat­terned in­for­ma­tion re­lat­ing to each drive model and is not ran­domly gen­er­ated, so the 6L4 string would ap­pear to mean some­thing use­ful.

WDC is back with a splash, as the AFR for this drive model is just 0.16%—that’s with 6,002 dri­ves in­stalled, but only for 1.7 months on av­er­age. Still, with only one fail­ure dur­ing that time, they are off to a great start. We are look­ing for­ward to see­ing how they per­form over the com­ing months.

There are six Seagate drive mod­els that were new to our farm in 2020. Five of these mod­els are listed in the table above and one model had only 26 dri­ves, so it was not listed. These dri­ves ranged in size from 12TB to 18TB and were used for both mi­gra­tion re­place­ments as well as new stor­age. As a group, they to­taled 13,596 dri­ves and amassed 1,783,166 drive days with just 46 fail­ures for an AFR of 0.94%.
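That 0.94% follows directly from the failures-per-drive-day definition of AFR; a quick sketch of the arithmetic, using the group totals quoted above:

```python
# Annualized failure rate (AFR) from drive days:
#   AFR = failures / drive_days * 365, expressed as a percentage.

def afr_percent(failures: int, drive_days: int) -> float:
    return failures / drive_days * 365 * 100

# The new-Seagate group above: 46 failures over 1,783,166 drive days.
print(round(afr_percent(46, 1_783_166), 2))  # -> 0.94
```

This is also why the 250,000-drive-day threshold matters: with few drive days, a single failure swings the ratio wildly.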

The new Toshiba 14TB drive (model: MG07ACA14TA) and the new Toshiba 16TB drive (model: MG08ACA16TEY) were introduced to our data centers in 2020 and they are putting up zeros, as in zero failures. While each drive model has only been installed for about two months, they are off to a great start.

The chart be­low com­pares the AFR for each of the last three years. The data for each year is in­clu­sive of that year only and for the drive mod­els pre­sent at the end of each year.

The AFR for 2020 dropped be­low 1% down to 0.93%. In 2019, it stood at 1.89%. That’s over a 50% drop year over year. So why was the 2020 AFR so low? The an­swer: It was a group ef­fort. To start, the older dri­ves: 4TB, 6TB, 8TB, and 10TB dri­ves as a group were sig­nif­i­cantly bet­ter in 2020, de­creas­ing from a 1.35% AFR in 2019 to a 0.96% AFR in 2020. At the other end of the size spec­trum, we added over 30,000 larger dri­ves: 14TB, 16TB, and 18TB, which as a group recorded an AFR of 0.89% for 2020. Finally, the 12TB dri­ves as a group had a 2020 AFR of 0.98%. In other words, whether a drive was old or new, or big or small, they per­formed well in our en­vi­ron­ment in 2020.

The chart be­low shows the life­time an­nu­al­ized fail­ure rates of all of the dri­ves mod­els in pro­duc­tion as of December 31, 2020.

Confidence in­ter­vals give you a sense of the use­ful­ness of the cor­re­spond­ing AFR value. A nar­row con­fi­dence in­ter­val range is bet­ter than a wider range, with a very wide range mean­ing the cor­re­spond­ing AFR value is not sta­tis­ti­cally use­ful. For ex­am­ple, the con­fi­dence in­ter­val for the 18TB Seagate dri­ves (model: ST18000NM000J) ranges from 1.5% to 45.8%. This is very wide and one should con­clude that the cor­re­spond­ing 12.54% AFR is not a true mea­sure of the fail­ure rate of this drive model. More data is needed. On the other hand, when we look at the 14TB Toshiba drive (model: MG07ACA14TA), the range is from 0.7% to 1.1% which is fairly nar­row, and our con­fi­dence in the 0.9% AFR is much more rea­son­able.
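The post doesn’t state the exact interval method, but the narrowing effect can be illustrated with a rough normal approximation to the Poisson failure count; the drive-day figures below are hypothetical, chosen only to show a wide versus a narrow interval:

```python
# Rough 95% interval for AFR via a normal approximation to the Poisson
# failure count. Illustrative only; Backblaze's exact method isn't specified.
import math

def afr_ci_percent(failures: int, drive_days: int, z: float = 1.96):
    def to_afr(k):
        return k / drive_days * 365 * 100
    spread = z * math.sqrt(failures)       # ~95% spread in failure counts
    point = to_afr(failures)
    low = to_afr(max(failures - spread, 0.0))
    high = to_afr(failures + spread)
    return point, low, high

# Few drive days and few failures: a wide, not-very-useful interval.
print(afr_ci_percent(2, 6_000))
# Many drive days: a narrow interval around the same kind of point estimate.
print(afr_ci_percent(500, 20_000_000))
```

The more drive days a model accumulates, the tighter the interval hugs the point estimate, which is exactly the contrast drawn between the 18TB Seagate and the 14TB Toshiba above.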

We always exclude boot drives from our reports as their function is very different from a data drive. While it may not seem obvious, having 3,000 boot drives is a bit of a milestone. It means we have 3,000 Backblaze Storage Pods in operation as of December 31st. All of these Storage Pods are organized into Backblaze Vaults of 20 Storage Pods each, or 150 Backblaze Vaults in total.

Over the last year or so, we moved from us­ing hard dri­ves to SSDs as boot dri­ves. We have a lit­tle over 1,200 SSDs act­ing as boot dri­ves to­day. We are val­i­dat­ing the SMART and fail­ure data we are col­lect­ing on these SSD boot dri­ves. We’ll keep you posted if we have any­thing worth pub­lish­ing.

The com­plete data set used to cre­ate the in­for­ma­tion used in this re­view is avail­able on our Hard Drive Test Data page. You can down­load and use this data for free for your own pur­pose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you ac­cept that you are solely re­spon­si­ble for how you use the data, and 3) you do not sell this data to any­one; it is free.

If you just want the sum­ma­rized data used to cre­ate the ta­bles and charts in this blog post you can down­load the ZIP file con­tain­ing the CSV files for each chart.

Good luck and let us know if you find any­thing in­ter­est­ing.


Read the original on www.backblaze.com »

6 303 shares, 6 trendiness

Open Collective


Read the original on opencollective.com »

7 294 shares, 25 trendiness

FDA approves first long-acting injectable to treat HIV infection

In a move that could transform HIV treatment, the Food and Drug Administration has approved a monthly injectable medication, a regimen designed to rival pills that must be taken daily.

The newly approved medicine, which is called Cabenuva, represents a significant advance in treating what continues to be a highly infectious disease. In 2018, for instance, there were approximately 36,400 people newly infected with HIV in the U.S., according to the Centers for Disease Control and Prevention. About 1.7 million people worldwide became newly infected in 2019, according to UNAIDS.

Although sev­eral med­i­cines ex­ist for treat­ing HIV, ViiV Healthcare is bank­ing on the im­proved con­ve­nience of get­ting a monthly shot, even if it must be ad­min­is­tered by a health care provider. The com­pany, which is largely con­trolled by GlaxoSmithKline (GSK), gath­ered data show­ing nine of 10 pa­tients in piv­otal stud­ies claimed to pre­fer the shot over tak­ing pills each day.

“This approval will allow some patients the option of receiving once-monthly injections in lieu of a daily oral treatment regimen,” said John Farley, who heads the Office of Infectious Diseases in the FDA’s Center for Drug Evaluation and Research, in a statement. “Having this treatment available for some patients provides an alternative for managing this chronic condition.”

Two clin­i­cal stud­ies of more than 1,100 pa­tients from 16 coun­tries found that Cabenuva was as ef­fec­tive in sup­press­ing the virus as the daily, oral, three-drug reg­i­mens that were taken by pa­tients through­out the 48-week study pe­riod. However, pa­tients must first take an oral ver­sion of the in­jectable med­i­cine and an­other pill for the first month be­fore piv­ot­ing to monthly shots, ac­cord­ing to the FDA.

The cost, however, is steep — the list, or wholesale, price is $3,960 a month, or more than $47,500 a year. The list price for the one-time initiation dose is $5,490. However, a ViiV spokeswoman explained that a 30-day “oral lead-in,” which is required as part of the approval, will be made available at no charge to patients. She also maintained the list price for the monthly shot is “within the range” of HIV treatment pills on the market today.

There is also con­sid­er­able com­pe­ti­tion from Gilead Sciences (GILD), which mar­kets sev­eral HIV med­i­cines. During 2019, its HIV treat­ment fran­chise gen­er­ated $17.2 bil­lion in rev­enue, a 12.5% year-over-year in­crease, and grabbed more than 80% of the mar­ket for pa­tients start­ing HIV ther­apy, ac­cord­ing to Cowen an­a­lyst Phil Nadeau. He ex­pects HIV med­ica­tion sales to hit $24.4 bil­lion in 2025, but points to HIV pre­ven­tion pills as big dri­vers of that in­crease.

ViiV, mean­while, is de­vel­op­ing an every-other-month in­jectable in hopes of cap­tur­ing a more sub­stan­tial mar­ket share.

Last November, an in­terim analy­sis found cabote­gravir — a com­po­nent of Cabenuva — was 89% more ef­fec­tive in pre­vent­ing in­fec­tion among women than Truvada, a Gilead pill that must be taken daily and is the cur­rent stan­dard of care. And a sep­a­rate analy­sis re­leased in July showed the every-other-month shot pro­tected un­in­fected peo­ple by 66% more com­pared with Truvada in at-risk men and trans­gen­der women who have sex with men.

“We see Cabenuva as the beginning of long-acting treatment for HIV,” said Kimberly Smith, who heads global research and medical strategy at ViiV and expects to seek FDA approval for this version of the shot in coming weeks. “We’re opening the door with Cabenuva and will only create more hunger for other long-acting therapies. It really becomes a sort of anchor.”

The ap­proval, by the way, comes more than a year af­ter ViiV hoped to win ap­proval for the treat­ment. In late 2019, the FDA is­sued a so-called com­plete re­sponse let­ter and ex­plained ap­proval could not be granted at the time due to is­sues with chem­istry, man­u­fac­tur­ing and con­trols, which are used to de­ter­mine safety, ef­fec­tive­ness and qual­ity.


Read the original on www.statnews.com »

8 289 shares, 4 trendiness

Gay Dating App "Grindr" to be fined almost € 10 Mio

In January 2020, the Norwegian Consumer Council and the European privacy NGO noyb.eu filed complaints against Grindr and several adtech companies over illegal sharing of users’ data. Like many other apps, Grindr shared personal data (like location data or the fact that someone uses Grindr) with potentially hundreds of third parties for advertisement.

Today, the Norwegian Data Protection Authority upheld the complaints, confirming that Grindr did not receive valid consent from users in an advance notification. The Authority imposes a fine of 100 Mio NOK (€ 9.63 Mio or $ 11.69 Mio) on Grindr. An enormous fine, as Grindr only reported a profit of $ 31 Mio in 2019 - a third of which is now gone.

The Norwegian Consumer Council filed three strategic GDPR complaints in cooperation with noyb. The complaints were filed with the Norwegian Data Protection Authority (DPA) against the gay dating app Grindr and five adtech companies that were receiving personal data through the app, among them Twitter’s MoPub and AT&T’s AppNexus (now Xandr).

Grindr was directly and indirectly sending highly personal data to potentially hundreds of advertising partners. A report by the NCC described in detail how a large number of third parties constantly receive personal data about Grindr’s users. Every time a user opens Grindr, information like the current location, or the fact that a person uses Grindr, is broadcasted to advertisers. This information is also used to create comprehensive profiles about users, which can be used for targeted advertising and other purposes.

Under the GDPR, consent must be unambiguous, informed, specific and freely given. The Norwegian DPA held that the alleged “consent” Grindr tried to rely on was invalid. Users were neither properly informed, nor was the consent specific enough, as users had to agree to the entire privacy policy and not to a specific processing operation, such as the sharing of data with other companies.

Consent must also be freely given. The DPA highlighted that users should have a real choice not to consent without any negative consequences. Grindr made the use of the app conditional on consenting to data sharing or on paying a subscription fee.

“The message is simple: ‘take it or leave it’ is not consent. If you rely on unlawful ‘consent’ you are subject to a hefty fine. This does not only concern Grindr, but many websites and apps.” — Ala Krinickytė, data protection lawyer at noyb

“This not only sets limits for Grindr, but establishes strict legal requirements on a whole industry that profits from collecting and sharing information about our preferences, location, purchases, physical and mental health, sexual orientation, and political views.” — Finn Myrstad, Director of digital policy at the Norwegian Consumer Council (NCC)

Grindr must police external “partners”. Moreover, the Norwegian DPA concluded that Grindr failed to “control and take responsibility” for its data sharing with third parties. Grindr shared data with potentially hundreds of third parties by including tracking codes in its app. It then blindly trusted these adtech companies to comply with an ‘opt-out’ signal that is sent to the recipients of the data. The DPA noted that companies could easily ignore the signal and continue to process personal data of users. The lack of any factual control and responsibility over the sharing of users' data from Grindr is not in line with the accountability principle of Article 5(2) GDPR. Many companies in the industry use such a signal, mainly the TCF framework by the Interactive Advertising Bureau (IAB).

“Companies cannot just include external software into their products and then hope that they comply with the law. Grindr included the tracking code of external partners and forwarded user data to potentially hundreds of third parties - it now also has to ensure that these ‘partners’ comply with the law.”

Grindr: users may be “bi-curious”, but not gay? The GDPR specially protects information about sexual orientation. Grindr, however, took the view that such protections do not apply to its users, as the use of Grindr would not reveal the sexual orientation of its customers. The company argued that users may be straight or “bi-curious” and still use the app. The Norwegian DPA did not buy this argument from an app that identifies itself as being ‘exclusively for the gay/bi community’. The additional questionable argument by Grindr that users made their sexual orientation “manifestly public” and that it is therefore not protected was equally rejected by the DPA.

“An app for the gay community that argues that the special protections for exactly that community actually do not apply to them is rather remarkable. I am not sure if Grindr's lawyers have really thought this through.”

Successful objection unlikely. The Norwegian DPA issued an “advance notice” after hearing Grindr in a procedure. Grindr can still object to the decision within 21 days, in which case it will be reviewed by the DPA. However, it is unlikely that the outcome could be changed in any material way. Further fines may be upcoming, though, as Grindr now relies on a new consent system and an alleged “legitimate interest” to use data without user consent. This is in conflict with the decision of the Norwegian DPA, which explicitly held that “any extensive disclosure … for marketing purposes should be based on the data subject's consent”.

“The case is clear from the factual and legal side. We do not expect any successful objection by Grindr. However, more fines may be in the pipeline for Grindr as it lately claims an unlawful ‘legitimate interest’ to share user data with third parties - even without consent. Grindr may be bound for a second round.”

* The pro­ject was led by the Norwegian Consumer Council

* The tech­ni­cal tests were car­ried out by the se­cu­rity com­pany mnemonic.

* The re­search on the adtech in­dus­try and spe­cific data bro­kers was per­formed with as­sis­tance from the re­searcher Wolfie Christl of Cracked Labs.

* Additional au­dit­ing of the Grindr app was per­formed by the re­searcher Zach Edwards of MetaX.

* The le­gal analy­sis and for­mal com­plaints were writ­ten with as­sis­tance from noyb.


Read the original on noyb.eu »

9 274 shares, 2 trendiness

The Battle of GameStop

Over the past sev­eral weeks, GameStop stock has traded more like a cryp­tocur­rency than a fail­ing mall-based re­tailer.

What is go­ing on here?

In one sen­tence: Institutional in­vestors short GameStop (i.e., the pre­vail­ing wis­dom, at least un­til the past few weeks) are play­ing a game of chicken with re­tail in­vestors & con­trar­ian in­sti­tu­tions who are long. [1]

GameStop is a video game re­tailer; it has been in de­cline for sev­eral years now. Video games have moved to an on­line, di­rect-to-con­sumer dis­tri­b­u­tion model. Foot traf­fic in malls (where most GameStops are lo­cated) was down even be­fore COVID; many mall-based re­tail­ers are strug­gling.

Unsurprisingly, over the course of 2020 this led to GameStop be­com­ing one of the most shorted stocks on Wall Street.

However, there were early signs that GameStop was un­der­val­ued. Michael Burry (of The Big Short fame) took a large long po­si­tion in 2019, claim­ing video game discs are not en­tirely dead. In August 2020, Roaring Kitty (a.k.a. u/​Deep­Fuck­ing­Value on Reddit) pub­lished a video de­tail­ing why GameStop was a good play based on its fun­da­men­tals — a fu­ture short squeeze would just be the ic­ing on the cake.

On January 11th, Ryan Cohen (founder of Chewy, which sold to PetSmart for $3.35 billion) joined GameStop's board after his investment firm built up a 10% stake in the company. At this point, retail investors, especially those on the popular subreddit Wall Street Bets, went crazy. They highlighted that GameStop was now a growth play: it is led by a previously successful founder, its online business is growing at a 300% rate, and it is in the process of turning around its core business. [2] As such, GameStop should be valued at a venture-capital multiple of 10x+ revenue, rather than a measly 0.5x revenue.

This narrative is compelling. Despite short sellers warning otherwise, GameStop has continued to climb in price. All of the GameStop options issued (with a high strike price of $60) were in the money on Friday (1/22/2021), triggering a gamma squeeze as institutions who had written the options rushed to cover their positions. GameStop closed Friday at $65.01.
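The hedging mechanics behind a gamma squeeze can be sketched with the standard Black-Scholes call delta: a market maker who has sold a call typically holds roughly delta × 100 shares as a hedge, and delta climbs toward 1 as the stock rises through the strike, forcing additional buying. A minimal sketch with illustrative parameters (the strike matches the $60 mentioned above, but the volatility and expiry are assumed, not actual GME option data):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot, strike, vol, t_years, r=0.0):
    # Black-Scholes delta of a European call: N(d1).
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    return norm_cdf(d1)

# Illustrative numbers only: a $60-strike call, very high implied
# volatility, one week to expiry. Not actual GME market data.
strike, vol, t = 60.0, 2.0, 7 / 365

for spot in (40, 60, 80, 100):
    d = call_delta(spot, strike, vol, t)
    # A hedger short one contract (100 shares) holds ~delta*100 shares.
    print(f"spot ${spot:>3}: delta={d:.2f} -> hedge ~{d*100:.0f} shares per contract")
```

As the price rises, the required hedge grows, and the hedger's own buying adds to the upward pressure.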

On Monday (1/25/2021), GameStop opened at $96.73, spiked at $159.18 (likely be­cause of an­other gamma squeeze), [3] then crashed with pres­sure from in­sti­tu­tional shorts, clos­ing at $76.79 (still up 18% day-over-day).

But the bulls aren't finished with GameStop. These gamma squeezes are nothing compared to what will be coming: the near-mythical “Infinity Squeeze”. Most famously seen with Volkswagen in 2008: when short sellers are forced to cover their positions due to a margin call, the price of the stock rises rapidly (hypothetically to infinity), since the number of shares shorted exceeds the number of shares available to buy.
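A toy illustration of why short interest above 100% of the float is explosive (all numbers below are arbitrary illustrations, not actual GME figures):

```python
# Toy model, not market data: when more shares are sold short than are
# available to buy, forced covering becomes a game of musical chairs and
# each round of buying bids the price up.
float_shares = 50_000_000   # hypothetical tradable float
shares_short = 70_000_000   # hypothetical short interest: 140% of float
price = 65.0                # starting price

rounds = 0
while shares_short > 0:
    # Assume at most 10% of the float changes hands per round, and the
    # excess demand pushes the price up 25% per round (arbitrary numbers).
    covered = min(shares_short, float_shares // 10)
    shares_short -= covered
    price *= 1.25
    rounds += 1

print(f"{rounds} rounds of covering, final price ${price:,.2f}")
```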

Can a sub­red­dit com­prised of re­tail in­vestors re­ally move the mar­ket like this? I doubt it — all of the big swings in this stock have been caused by in­sti­tu­tions. What this sub­red­dit does is con­trol the nar­ra­tive.

First un­veiled to the main­stream fi­nance world in a February 2020 Bloomberg ar­ti­cle, Wall Street Bets is pro­fane (as I’m sure you’ve no­ticed if you clicked any of the links in this post). But Wall Street Bets is­n’t some sin­is­ter, mar­ket-ma­nip­u­lat­ing en­tity. Rather, it is a vir­tual wa­ter cooler for in­di­vid­ual re­tail in­vestors to post memes — and emo­jis, oh so many emo­jis — about their in­vest­ments.

Reading Wall Street Bets feels like the discussion at a middle school cafeteria table circa 2000. Redditors on Wall Street Bets (who refer to themselves affectionately as “autists” or “retards”) [4] encourage one another to have “diamond hands” (💎 🤲), the will to stay strong and not sell a stock when things are going poorly. Contrast this with the “paper hands” (🧻 🤲) of those who are weak-willed and sell a stock based on market sentiment. Companies are headed “to the moon” (🚀). Bears are not mentioned without the adjective “gay” (🌈 🐻). Self-deprecating cuckold references to “my wife's boyfriend” abound.

Despite this lan­guage (or per­haps be­cause of it), Wall Street Bets is one of the most en­ter­tain­ing and in­for­ma­tive places on the in­ter­net. People post mean­ing­ful analy­sis of com­pa­nies that are un­der­val­ued and why they are in­vest­ing. Browsing the sub­red­dit, you get a crash course on con­cepts that you would oth­er­wise learn only at a buy-side firm or work­ing as an op­tions trader: EBITDA mul­ti­ple, book value, delta hedg­ing, im­plied volatil­ity.

But the most com­pelling as­pect of Wall Street Bets is in its name: the bets. The abil­ity to gain (or lose) a life-chang­ing amount of money — with screen­shots to prove it — cre­ates an en­vi­ron­ment sim­i­lar to that of the casino floor. And if Wall Street Bets is the casino floor, then Wall Street it­self is the house.

The same emo­tion that caused us to root for the thieves in Ocean’s 11 is what makes Wall Street Bets so en­tic­ing. Put frankly, Millennials are tired of get­ting fucked by the man. When you’re un­der­em­ployed with $100,000 in stu­dent loan debt, your fi­nan­cial sit­u­a­tion feels over­whelm­ing. You re­ally don’t want to take the ad­vice of your par­ents or CNBC talk­ing heads [5] to in­vest 10% of your salary for a 4% an­nual re­turn. At that point, what’s an­other $5,000? Might as well buy some short-dated GME calls.

For those of us who don't fit the underemployed Millennial archetype, Matt Levine's Boredom Markets Hypothesis applies. COVID has required us to work from home, without much ability to spend on travel, dining, or entertainment. Putting money into Robinhood is a decent substitute - with the added bonus of it being an “investment”, rather than consumption. In an age where the Fed will print seemingly unlimited money to prop up capital markets, better to be irrationally exuberant as a part of the market than be left out of the party.

Further, the narrative presented by the GameStop trade in particular is compelling. It allows the small retail investor to play a role in market events normally only played out at the hedge fund scale (a short squeeze was a key plot element in Season 1 of Billions). The short sellers in this case aren't particularly sympathetic: Andrew Left of Citron Research released a video in which he lays out the bear case for GameStop. His main argument was a smug appeal to authority, essentially claiming “Wall Street knows better than you people on message boards”. [6]

So sure, Wall Street Bets is ir­rev­er­ent, has ir­ra­tional ex­u­ber­ance, and is guilty of hero wor­ship (Elon Musk and more re­cently Ryan Cohen). But it also pro­vides a sense of com­mu­nity dur­ing the stresses of COVID and pro­vides a com­pelling way for the lit­tle guy to stick it to the man.

As Keynes reminded us (in the most overused finance quote of all time): “The markets can remain irrational longer than you can remain solvent.” When enough people believe in a vision, it can cause that vision to manifest itself. Wall Street is scared that retail investors can manifest their own vision, rather than the one dictated by the major financial players.

The en­tire GameStop sce­nario is a case study in re­flex­iv­ity. [7] Reflexivity is the idea that our per­cep­tion of cir­cum­stances in­flu­ences re­al­ity, which then fur­ther im­pacts our per­cep­tion of re­al­ity, in a self-re­in­forc­ing loop. Specifically, in a fi­nan­cial mar­ket, prices are a re­flec­tion of traders’ ex­pec­ta­tions. Those prices then in­flu­ence traders’ ex­pec­ta­tions, and so on.
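The loop described above can be sketched in a few lines of code (the dynamics and coefficients below are purely illustrative, not a calibrated market model):

```python
# Illustrative reflexivity loop: expectations respond to price, price
# responds to expectations, and a small optimism bias compounds.
price = 100.0
expectation = 100.0
bias = 2.0          # arbitrary per-step optimism drift

history = []
for step in range(10):
    expectation += 0.5 * (price - expectation) + bias  # expectations chase price, plus bias
    price += 0.5 * (expectation - price)               # price chases expectations
    history.append(price)

# The loop is self-reinforcing: price rises every step even though
# nothing about the underlying "fundamentals" changed.
print(f"price after 10 steps: {history[-1]:.1f}")
```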

This may seem ob­vi­ous to some, but it flies in the face of the ef­fi­cient-mar­ket hy­poth­e­sis. As Soros states,

What makes re­flex­iv­ity in­ter­est­ing is that the pre­vail­ing bias has ways, via the mar­ket prices, to af­fect the so-called fun­da­men­tals that mar­ket prices are sup­posed to re­flect. [8]

What does this mean for GameStop? Because of traders' bullish sentiment, a previously failing company is now in a position where it can leverage the overnight increase in value to make real, substantive changes to its business. GameStop can pay off debt through the issuance of new shares or make strategic acquisitions using its newly-valuable shares. [9] A struggling company could become solid not because of any change in the underlying business, but simply because investors decided it should be more valuable.

Reflexivity may be the best way to un­der­stand the 21st Century. Passive in­vest­ing is an ex­am­ple of re­flex­iv­ity in ac­tion. [10] So is win­ner-take-all ven­ture in­vest­ing. Uber raised an ab­surd war chest, caus­ing more in­vestors to want to pile in, which led to more fundrais­ing and even­tu­ally a suc­cess­ful IPO. The fact that Uber has not yet turned a profit, yet to­day has a $100 bil­lion mar­ket cap, can­not be ex­plained with tra­di­tional fi­nan­cial think­ing, but can be ex­plained by re­flex­iv­ity.

The in­ter­net and in­stant com­mu­ni­ca­tion only ac­cel­er­ates these trends. Instances of re­flex­iv­ity like the strange mar­ket move­ments we’ve seen with GameStop are hap­pen­ing more and more — not only in fi­nan­cial mar­kets, but also in the po­lit­i­cal and so­cial realm, to in­cred­i­ble ef­fect.

When Donald Trump won the presidency in 2016, I distinctly remember writing in my journal: “Anything is possible.” I was blown away that this complete buffoon of a man, someone the Huffington Post refused to cover as politics, had meme-d his way into the presidency. He was a joke, until suddenly, in a Tulpa-esque twist…he wasn't. Similarly, internet conspiracy theories spread via Facebook memes manifested themselves in the real world when Trump supporters stormed the Capitol a few weeks ago.

Our perception shapes reality. And when enough people agree on a specific perception, it becomes reality. [11] As we become more and more connected, discourse will expand and accelerate. We're going to see some strange things become reality.

Even, per­haps, hedge funds go­ing bank­rupt and newly-minted mil­lion­aires, all be­cause of some peo­ple who wrote about a strug­gling video game re­tailer on Reddit.

Retail in­vestors ba­si­cally just shut a hedge fund down.

Citadel and Point72 are in­vest­ing (backstopping) $2.75 bil­lion into Melvin Capital who was su­per­man short $GME GameStop

Melvin down over 30% in 2021

Melvin cap is run by Gabe Plotkin a Steve Cohen SAC pro­tege— Will Meade (@realwillmeade) January 25, 2021

[1] For the best sum­mary of the cur­rent sit­u­a­tion, see Matt Levine.

[2] When you take into ac­count the clo­sures of poorly-per­form­ing stores, per-store rev­enue and prof­its are up.

[3] Options were writ­ten up to a strike price of $115 and these all were in the money.

[4] Not con­don­ing the lan­guage, but Wall Street Bets mem­bers with trad­ing gains of­ten make do­na­tions to these causes.

[5] Wall Street Bets has a love/hate (mostly hate) relationship with Jim Cramer, a.k.a. “Chillman Boomer”.

[6] Andrew Left is an in­ter­est­ing char­ac­ter. That said, I’m not here to at­tack him per­son­ally, and no­body in their right mind would con­done the al­leged threats made against him by GameStop bulls up­set by his stance on the com­pany.

[7] Good in­tro to Soros’s Theory of Reflexivity in this Financial Times ar­ti­cle.

[9] More de­tails in this Reddit post.

[10] Passive in­vest­ing is also help­ing GameStop’s run — as the price of the stock in­creases, in­dex funds need to buy more shares to re-weight, which in turn dri­ves up the price. Reflexivity.

[11] This is my favorite rebuttal for those who claim “cryptocurrency has no intrinsic value”. Sure — but neither does the U.S. dollar. We just all decided that it would have value, so it does.


Read the original on paranoidenough.com »

10 264 shares, 5 trendiness


On 8 January 2021 at 14:05 CET, the synchronous area of Continental Europe was separated into two parts due to outages of several transmission network elements in a very short time. ENTSO-E published first information on the event on 8 January 2021, followed by an update with a geographical view and time sequence on 15 January 2021. Since then, ENTSO-E has analysed a large portion of the relevant data with the aim of reconstructing the event in detail.

This second update presents the key findings of the detailed analyses. These findings are preliminary and subject to revision as new facts emerge from the still ongoing investigation.

The analysis of the sequence of events concludes that the initial event was the tripping of a 400 kV busbar coupler in the substation Ernestinovo (Croatia) by overcurrent protection at 14:04:25.9. This resulted in a decoupling of the two busbars in the Ernestinovo substation, which in turn separated North-West and South-East electric power flows in this substation. As shown in Figure 1 below, the North-West bound lines, which remained connected to one busbar, connect Ernestinovo to Zerjavinec (Croatia) and Pecs (Hungary), while the South-East bound lines, which remained connected to the other busbar, connect Ernestinovo to Ugljevik (Bosnia-Herzegovina) and Sremska Mitrovica (Serbia).

Figure 1 - Decoupling of two bus­bars in Ernestinovo

The separation of flows in the Ernestinovo substation led to the shifting of electric power flows onto neighbouring lines, which were subsequently overloaded. At 14:04:48.9, the line Subotica — Novi Sad (Serbia) tripped due to overcurrent protection. This was followed by the further tripping of lines due to distance protection, as shown in Figure 2 below, eventually leading to the separation of the system into two parts at 14:05:08.6.

Figure 2 - Tripping of ad­di­tional trans­mis­sion net­work el­e­ments af­ter the de­cou­pling of two bus­bars in Ernestinovo

The route where the two parts of the Continental Europe Synchronous Area were sep­a­rated is shown in Figure 3 be­low:

The sys­tem sep­a­ra­tion re­sulted in a deficit of power (approx. -6.3 GW) in the North-West Area and a sur­plus of power (approx. +6.3 GW) in the South-East Area, re­sult­ing in turn in a fre­quency de­crease in the North-West Area and a fre­quency in­crease in the South-East Area.

At ap­prox­i­mately 14:05 CET, the fre­quency in the North-West Area ini­tially de­creased to a value of 49.74 Hz within a pe­riod of around 15 sec­onds be­fore quickly reach­ing a steady state value of ap­prox­i­mately 49.84 Hz. At the same time, the fre­quency in the South-East Area ini­tially in­creased up to 50.6 Hz be­fore set­tling at a steady state fre­quency be­tween 50.2 Hz and 50.3 Hz as il­lus­trated in Figure 4 be­low:
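These steady-state numbers can be tied together with the standard load-frequency relation Δf ≈ ΔP / λ, where λ is the network power-frequency characteristic. A back-of-envelope sketch deriving the implied λ for the North-West Area from the figures quoted above (a rough estimate for illustration, not a value from the report):

```python
# Rough load-frequency arithmetic using only the figures quoted above.
delta_p_gw = 6.3          # reported power deficit in the North-West Area (GW)
f_nominal_hz = 50.0
f_steady_hz = 49.84       # reported steady-state frequency after the initial drop

delta_f_hz = f_nominal_hz - f_steady_hz        # 0.16 Hz deviation
implied_lambda = delta_p_gw / delta_f_hz       # GW per Hz

print(f"frequency deviation: {delta_f_hz:.2f} Hz")
print(f"implied power-frequency characteristic: ~{implied_lambda:.0f} GW/Hz")
```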

Figure 4 - Frequency in Continental Europe dur­ing the event on 8 January 2021 right af­ter the dis­tur­bance and dur­ing re­syn­chro­ni­sa­tion

Due to the low fre­quency in the North-West Area, con­tracted in­ter­rupt­ible ser­vices in France and Italy (in to­tal around 1.7 GW) were dis­con­nected in or­der to re­duce the fre­quency de­vi­a­tion. These ser­vices are pro­vided by large cus­tomers who are con­tracted by the re­spec­tive Transmission System Operators (TSOs) to be dis­con­nected if fre­quency drops un­der a cer­tain thresh­old. In ad­di­tion, 420 MW and 60 MW of sup­port­ive power were au­to­mat­i­cally ac­ti­vated from the Nordic and Great Britain syn­chro­nous ar­eas re­spec­tively. These coun­ter­mea­sures en­sured that al­ready at 14:09 CET the fre­quency de­vi­a­tion from the nom­i­nal value of 50 Hz was re­duced to around 0.1 Hz in the North-West area (Figure 4).

In order to reduce the high frequency in the South-East Area, automatic and manual countermeasures were activated, including the reduction of generation output (e.g. the automatic disconnection of a 975 MW generator in Turkey at 14:04:57). As a consequence, the frequency in the South-East Area returned to 50.2 Hz at 14:29 CET and remained within control limits (49.8 and 50.2 Hz) until the resynchronisation of the two separated areas took place at 15:07:31.6 CET.

Between 14:30 CET and 15:06 CET, the frequency in the South-East Area fluctuated between 49.9 Hz and 50.2 Hz due to the rather small size of the South-East Area, in which several production units had also been disconnected (Figure 5). During this period, the frequency in the North-West Area fluctuated far less and remained close to the nominal value, due to the rather large size of the North-West Area. This frequency behaviour is a subject of further detailed investigation.

Figure 5 - Frequency in Continental Europe dur­ing the event on 8 January 2021 for the com­plete du­ra­tion

The au­to­matic re­sponse and the co­or­di­nated ac­tions taken by the TSOs in Continental Europe en­sured that the sit­u­a­tion was quickly re­stored close to nor­mal op­er­a­tion. The con­tracted in­ter­rupt­ible ser­vices in Italy and in France were re­con­nected at 14:47 CET and 14:48 CET re­spec­tively prior to the re­syn­chro­ni­sa­tion of the North-West and South-East ar­eas at 15:08 CET.

ENTSO-E con­tin­ues to keep the European Commission and the Electricity Coordination Group, com­posed of rep­re­sen­ta­tives of Member States, in­formed and up­dated with de­tailed re­sults of the pre­lim­i­nary tech­ni­cal analy­ses.

Based on the pre­lim­i­nary tech­ni­cal analy­ses pre­sented above, a for­mal in­ves­ti­ga­tion fol­low­ing the le­gal frame­work un­der the Commission Regulation (EU) 2017/1485 of 2 August 2017 (System Operation Guideline) will be es­tab­lished, whereby National Regulatory Authorities and ACER are in­vited to join with TSOs in an Expert Investigation Panel.

In line with the pro­vi­sions of the men­tioned Commission Regulation (EU) 2017/1485 of 2 August 2017, ENTSO-E will pre­sent the re­sults of the in­ves­ti­ga­tion to the Electricity Coordination Group and will sub­se­quently pub­lish a re­port once the analy­sis is com­pleted.

Note: All fig­ures and de­tails about the se­quence of the events are still sub­ject to fi­nal in­ves­ti­ga­tion and pos­si­ble changes.

The transmission grids of the countries of Continental Europe are electrically tied together to operate synchronously at a frequency of approximately 50 Hz. An event on 8 January 2021 caused the Continental Europe synchronous area to separate into two areas, with an area in the South-East of Europe temporarily operating in separation from the rest of Continental Europe.

Is this the first time such an event has happened in Continental Europe?

The Continental Europe synchronous area is one of the largest interconnected synchronous electricity systems in the world in terms of its size and number of supplied customers. Such an event can happen in any electric power system. System resilience and the preparedness of the system operators in charge have a decisive impact on the consequences of such events. A separation of the synchronous area with a much larger disturbance and impact on customers took place in Continental Europe on 4 November 2006. This event was extensively analysed and led to a number of substantial developments, like the European Awareness System (EAS), a platform allowing TSOs to exchange operational information in real time, enabling them to react immediately in case of unusual system conditions. The TSOs are therefore well prepared to coordinate and manage such events and limit the consequences. This preparedness and permanent observation of the system frequency allowed the two separated areas to be resynchronised within a very short period of time.

How are coun­ter­mea­sures co­or­di­nated in Continental Europe in case of fre­quency de­vi­a­tions?

In Continental Europe, procedures are in place to avoid system disturbances and especially large frequency deviations, with their risk of uncoordinated disconnection of customers or generation. The TSOs Amprion (Germany) and Swissgrid (Switzerland) are responsible for these procedures in their role as synchronous area monitors (SAM) in Continental Europe. The SAM continuously monitors the system frequency. In case of large frequency deviations, they inform all TSOs via the European Awareness System (EAS) and launch an extraordinary procedure for frequency deviations to coordinate countermeasures in a fast and effective manner in order to stabilise the system. One step of this procedure is a telephone conference between Amprion, Swissgrid, RTE (France), Terna (Italy) and REE (Spain). This teleconference took place at 14:09 CET on 8 January 2021. In the telephone conference, the situation was evaluated and the TSOs informed each other about the countermeasures that had already been activated. The TSOs of the North-West and South-East Areas also coordinated the actions for reconnection in order to restore a single synchronous area in Continental Europe.

Were end cus­tomers dis­con­nected? Were there any other con­se­quences?

Customers in the order of 70 MW in the North-West Area and in the order of 163 MW in the South-East Area were disconnected. Due to the high resilience of the interconnected network and the rapid response of European TSOs, the security of operation and electricity supply was not endangered further. An important contribution to stabilising the system was delivered by the previously contracted interruptible services, which were activated in France and Italy. Such contracts, agreed with customers in advance, allow the TSO to temporarily and automatically reduce electrical consumption depending on the real-time situation of the electric power system.

What is an elec­tri­cal bus­bar?

A busbar is an electrical junction in a substation, which connects overhead lines, cables and transformers through electrical switches. Usually, there are several busbars in a substation, which can be connected by a busbar coupler.

Are there special devices protecting the equipment in a substation?

Various devices protect the equipment in a substation. One of them is an overcurrent protection relay, which automatically disconnects the equipment (e.g. an overhead line or cable) if the electrical current becomes so high that it could cause damage to the equipment. A current higher than what the equipment's material (e.g. aluminium wrapped around a steel carrier rope) is rated for will cause mechanical damage and can also endanger people and other assets, if, for instance, a damaged overhead line drops to the ground without being disconnected. Another type of protection are distance protection relays, which measure a combination of current and voltage over time and act selectively to protect equipment depending on the distance of the failure from the equipment.

What next steps are fore­seen for the in­ves­ti­ga­tion?

According to Article 15 of the Commission Regulation (EU) 2017/1485, for a Scale 2 event such as the one of 8 January 2021, an Expert Investigation Panel shall be set up, composed of TSO experts, to which National Regulatory Authorities and ACER are also invited. The Expert Investigation Panel will produce a report which describes in detail the sequence of events, root causes and, if applicable, necessary actions to contribute to preventing similar events in the future. The next steps, timeline and final publishing dates, as well as all other relevant information, will be published on the ENTSO-E website.


Read the original on www.entsoe.eu »
