10 interesting stories served every morning and every evening.




1 1,361 shares, 303 trendiness

How I cut GTA Online loading times by 70%

GTA Online. Infamous for its slow loading times. Having picked up the game again to finish some of the newer heists, I was shocked (/s) to discover that it still loads just as slowly as the day it was released 7 years ago.

It was time. Time to get to the bot­tom of this.

First I wanted to check if someone had already solved this problem. Most of the results I found pointed towards anecdata about how the game is so sophisticated that it takes this long to load, stories about how the p2p network architecture is rubbish (not saying that it isn't), some elaborate ways of loading into story mode and a solo session after that, and a couple of mods that allowed skipping the startup R* logo video. Some more reading told me we could save a whopping 10-30 seconds with these combined!

Meanwhile on my PC…

I know my setup is dated but what on earth could take 6x longer to load into on­line mode? I could­n’t mea­sure any dif­fer­ence us­ing the story-to-on­line load­ing tech­nique as oth­ers have found be­fore me. Even if it did work the re­sults would be down in the noise.

If this poll is to be trusted then the is­sue is wide­spread enough to mildly an­noy more than 80% of the player base. It’s been 7 years R*!

Looking around a bit to find who are the lucky ~20% that get sub 3 minute load times I came across a few bench­marks with high-end gam­ing PCs and an on­line mode load time of about 2 min­utes. I would hack for a 2 minute load time! It does seem to be hard­ware-de­pen­dent but some­thing does­n’t add up here…

How come their story mode still takes near a minute to load? (The M.2 one did­n’t count the startup lo­gos btw.) Also, load­ing story to on­line takes them only a minute more while I’m get­ting about five more. I know that their hard­ware specs are a lot bet­ter but surely not 5x bet­ter.

Armed with such pow­er­ful tools as the Task Manager I be­gan to in­ves­ti­gate what re­sources could be the bot­tle­neck.

After taking a minute to load the common resources used for both story and online modes (which is nearly on par with high-end PCs), GTA decides to max out a single core on my machine for four minutes and do nothing else.

Disk us­age? None! Network us­age? There’s a bit, but it drops ba­si­cally to zero af­ter a few sec­onds (apart from load­ing the ro­tat­ing info ban­ners). GPU us­age? Zero. Memory us­age? Completely flat…

What, is it min­ing crypto or some­thing? I smell code. Really bad code.

While my old AMD CPU has 8 cores and it does pack a punch, it was made in the olden days, back when AMD's single-thread performance was way behind Intel's. This might not explain all of the load time difference, but it should explain most of it.

What’s odd is that it’s us­ing up just the CPU. I was ex­pect­ing vast amounts of disk reads load­ing up re­sources or loads of net­work re­quests try­ing to ne­go­ti­ate a ses­sion in the p2p net­work. But this? This is prob­a­bly a bug.

Profilers are a great way of find­ing CPU bot­tle­necks. There’s only one prob­lem - most of them rely on in­stru­ment­ing the source code to get a per­fect pic­ture of what’s hap­pen­ing in the process. And I don’t have the source code. Nor do I need mi­crosec­ond-per­fect read­ings - I have 4 min­utes’ worth of a bot­tle­neck.

Enter stack sampling: for closed-source applications there's only one option. Dump the running process's stack and the current instruction pointer's location at set intervals to build a calling tree. Then add the samples up to get statistics on what's going on. There's only one profiler that I know of (might be ignorant here) that can do this on Windows. And it hasn't been updated in over 10 years. It's Luke Stackwalker! Someone, please give this project some love :)

Normally Luke would group the same func­tions to­gether but since I don’t have de­bug­ging sym­bols I had to eye­ball nearby ad­dresses to guess if it’s the same place. And what do we see? Not one bot­tle­neck but two of them!

Having bor­rowed my friend’s com­pletely le­git­i­mate copy of the in­dus­try-stan­dard dis­as­sem­bler (no, I re­ally can’t af­ford the thing… gonna learn to ghidra one of these days) I went to take GTA apart.

That does­n’t look right at all. Most high-pro­file games come with built-in pro­tec­tion against re­verse en­gi­neer­ing to keep away pi­rates, cheaters, and mod­ders. Not that it has ever stopped them.

There seems to be some sort of an ob­fus­ca­tion/​en­cryp­tion at play here that has re­placed most in­struc­tions with gib­ber­ish. Not to worry, we sim­ply need to dump the game’s mem­ory while it’s ex­e­cut­ing the part we want to look at. The in­struc­tions have to be de-ob­fus­cated be­fore run­ning one way or an­other. I had Process Dump ly­ing around, so I used that, but there are plenty of other tools avail­able to do this sort of thing.

Disassembling the now-less-obfuscated dump reveals that one of the addresses has a label pulled out of somewhere! It's strlen? Going down the call stack, the next one is labeled vscan_fn, and after that the labels end, though I'm fairly confident it's sscanf.

It’s pars­ing some­thing. Parsing what? Untangling the dis­as­sem­bly would take for­ever so I de­cided to dump some sam­ples from the run­ning process us­ing x64dbg. Some de­bug-step­ping later it turns out it’s… JSON! They’re pars­ing JSON. A whop­ping 10 megabytes worth of JSON with some 63k item en­tries.

What is it? It appears to be data for a "net shop catalog" according to some references. I assume it contains a list of all the possible items and upgrades you can buy in GTA Online.

Clearing up some confusion: I believe these are in-game-money purchasable items, not directly linked with microtransactions.

But 10 megs? That’s noth­ing! And us­ing ss­canf may not be op­ti­mal but surely it’s not that bad? Well…

Yeah, that's gonna take a while… To be fair I had no idea most sscanf implementations called strlen so I can't blame the developer who wrote this. I would assume it just scanned byte by byte and could stop on a NULL.
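To make the failure mode concrete, here's a minimal sketch (mine, not code from the game) of the pattern that bites you: calling sscanf in a loop over one huge buffer. If the sscanf implementation runs strlen on its input every call, each of the ~63k items re-scans the megabytes of JSON that remain, and the total work grows quadratically with the file size:

```c
#include <stdio.h>

/* Hypothetical illustration: token-by-token parsing of a huge buffer with
 * sscanf. If sscanf internally calls strlen(p) on every invocation, each
 * iteration walks the entire remaining buffer just to find its end, so
 * parsing ~63k entries out of a 10 MB string costs O(n^2) character reads. */
static void parse_items_slowly(const char *json)
{
    const char *p = json;
    char token[256];
    int consumed = 0;

    while (sscanf(p, "%255s%n", token, &consumed) == 1) {
        /* ... turn `token` into an item entry here ... */
        p += consumed;   /* advance, then rescan everything after it again */
    }
}
```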

Turns out the sec­ond of­fender is called right next to the first one. They’re both even called in the same if state­ment as seen in this ugly de­com­pi­la­tion:

All la­bels are mine, no idea what the func­tions/​pa­ra­me­ters are ac­tu­ally called.

The sec­ond prob­lem? Right af­ter pars­ing an item, it’s stored in an ar­ray (or an in­lined C++ list? not sure). Each en­try looks some­thing like this:

But before it's stored? It checks the entire array, one by one, comparing the hash of the item to see if it's in the list or not. With ~63k entries that's (n^2+n)/2 = (63000^2+63000)/2 = 1,984,531,500 checks if my math is right. Most of them useless. They have unique hashes, so why not use a hash map?
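For contrast, here is a rough sketch (names and sizes are mine, not taken from the game) of what that membership check looks like with an open-addressed hash set keyed on the item hash: each of the ~63k inserts becomes an O(1) probe on average instead of a scan over everything inserted so far. (The actual fix described later just skips the check, but this shows why a hash-based structure makes the test cheap.)

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical replacement for the linear duplicate scan: an open-addressed
 * hash set keyed on the 64-bit item hash. Sized as a power of two well above
 * the ~63k entries so probes stay short. Assumes 0 never occurs as a real hash. */
#define SET_SLOTS (1u << 17)

static uint64_t slots[SET_SLOTS];   /* zero-initialized: 0 means "empty" */

/* returns 1 if the hash was newly inserted, 0 if it was already present */
static int set_insert(uint64_t item_hash)
{
    size_t i = (size_t)(item_hash & (SET_SLOTS - 1));
    while (slots[i] != 0) {
        if (slots[i] == item_hash)
            return 0;                      /* duplicate */
        i = (i + 1) & (SET_SLOTS - 1);     /* linear probing */
    }
    slots[i] = item_hash;
    return 1;
}
```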

I named it hashmap while re­vers­ing but it’s clearly not_a_hashmap. And it gets even bet­ter. The hash-ar­ray-list-thing is empty be­fore load­ing the JSON. And all of the items in the JSON are unique! They don’t even need to check if it’s in the list or not! They even have a func­tion to di­rectly in­sert the items! Just use that! Srsly, WAT!?

Now that’s nice and all, but no one is go­ing to take me se­ri­ously un­less I test this so I can write a click­bait ti­tle for the post.

The plan? Write a .dll, in­ject it in GTA, hook some func­tions, ???, profit.

The JSON problem is hairy; I can't realistically replace their parser. Replacing sscanf with one that doesn't depend on strlen would be more realistic. But there's an even easier way (a sketch follows the list below):

* "cache" the start and length of it

* if it’s called again within the string’s range, re­turn cached value
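Here is roughly what that cached strlen replacement might look like; this is my own sketch of the idea described above, not the actual hook from the proof-of-concept DLL:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the "cache strlen" workaround. The injected DLL redirects the
 * game's strlen calls to something like this: the first call over the big
 * JSON string does a real scan and remembers the result; subsequent calls
 * with a pointer inside that same string are answered from the cache. */
static const char *cached_start = NULL;
static size_t      cached_len   = 0;

size_t strlen_cached(const char *s)
{
    if (cached_start != NULL &&
        s >= cached_start && s < cached_start + cached_len) {
        /* still inside the previously measured string: no rescan needed */
        return cached_len - (size_t)(s - cached_start);
    }

    cached_len   = strlen(s);   /* fall back to one real scan */
    cached_start = s;
    return cached_len;
}
```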

And as for the hash-ar­ray prob­lem, it’s more straight­for­ward - just skip the du­pli­cate checks en­tirely and in­sert the items di­rectly since we know the val­ues are unique.

Well, did it work then?

Hell yes, it did! :))

Most likely, this won’t solve every­one’s load times - there might be other bot­tle­necks on dif­fer­ent sys­tems, but it’s such a gap­ing hole that I have no idea how R* has missed it all these years.

* It turns out GTA strug­gles to parse a 10MB JSON file

* The JSON parser itself is poorly built / naive and

* After parsing, there's a slow item de-duplication routine

If this some­how reaches Rockstar: the prob­lems should­n’t take more than a day for a sin­gle dev to solve. Please do some­thing about it :<

You could ei­ther switch to a hashmap for the de-du­pli­ca­tion or com­pletely skip it on startup as a faster fix. For the JSON parser - just swap out the li­brary for a more per­for­mant one. I don’t think there’s any eas­ier way out.

...

Read the original on nee.lv »

2 622 shares, 43 trendiness

Actually Portable Executables

I came across Cosmopolitan on Hacker News, and I was ini­tially con­fused, due to a few mem­o­ries of cross-com­pi­la­tion night­mares: while it should be pos­si­ble to com­pile for the same ar­chi­tec­ture re­gard­less of op­er­at­ing sys­tem, would­n’t the OS get con­fused by the lead­ing bytes of the ex­e­cutable? I read the ar­ti­cle ex­plain­ing how it works, but most of it went over my head.

The ex­am­ple on the Github README used the fol­low­ing script for com­pi­la­tion:

I con­verted it into a sim­ple Makefile to run the com­pi­la­tion com­mands. I tried a bunch of sim­ple C pro­grams (basic arith­metic, read­ing and writ­ing to files) on Linux+Windows (compiled on Linux), and all of them worked.

I de­cided to try com­pil­ing a high-level lan­guage built on C. I orig­i­nally picked Python, but the Makefile for Python seemed too com­pli­cated to mess with, so I then picked Lua, which looked much sim­pler in com­par­i­son.

I started out by blindly copy-past­ing the flags and in­cludes used in the sam­ple com­pi­la­tion on Github. Ah, it would have been won­der­ful for my lazi­ness if it com­piled out of the box. Following is a play-by-play com­men­tary of try­ing to com­pile Lua.

The first problem I ran into was header clashes: if I didn't put -nostdlib -nostdinc while compiling each object file, -include cosmopolitan.h would clash with the system headers. But blocking the system headers meant I would have to change every #include of a system header. I created a bunch of dummy headers with the same names as those in the C stdlib and included those instead.
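For illustration, a dummy header in that scheme can be as small as this (the file name and guard are mine, and whether the stub is empty or pulls in the amalgamation depends on whether cosmopolitan.h is already force-included with -include):

```c
/* include/stdio.h -- hypothetical stub that shadows the system header.
 * With -nostdinc the real <stdio.h> is invisible, so every
 * "#include <stdio.h>" in the Lua sources lands here and simply defers
 * to the single Cosmopolitan amalgamation. */
#ifndef STUB_STDIO_H
#define STUB_STDIO_H
#include "cosmopolitan.h"
#endif
```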

Naming clashes: some of the macros in cos­mopoli­tan.h clashed with macro/​func­tion names in Lua: re­verse and isempty. I changed the Lua source to avoid this.

A macro FIRST_RESERVED was broken because UCHAR_MAX was missing. I thought UCHAR_MAX was supposed to be in limits.h — the limits.h part of cosmopolitan.h did not have UCHAR_MAX (it had SCHAR_MAX, though). I added a #define setting UCHAR_MAX to __UINT8_MAX__ (i.e. 255).

The default Lua Makefile attempts to use _setjmp/_longjmp in ldo.c when on Linux. I disabled the LUA_USE_LINUX flag for compiling the object files, but this caused an issue with tmpnam in loslib.c (mkstemp is available in Cosmopolitan). I changed the Lua source to use setjmp/longjmp. A similar issue showed up in lauxlib.c for sys/wait.h (which is a no-op in non-POSIX systems, as per the Lua source code), and in liolib.c for sys/types.h, so I disabled LUA_USE_POSIX over there as well.

The localeconv() function (part of locale.h) was not implemented in cosmopolitan.h, and this caused an error while compiling lobject.c (the macro lua_getlocaledecpoint() depended on localeconv()). I changed the macro to just return '.'.
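In Lua's luaconf.h that macro is defined in terms of localeconv(); the change amounts to something like this (the exact original definition may differ between Lua versions):

```c
/* luaconf.h -- before: the decimal point comes from the C locale */
/* #define lua_getlocaledecpoint()   (localeconv()->decimal_point[0]) */

/* after: hard-code '.', since Cosmopolitan doesn't provide localeconv() */
#define lua_getlocaledecpoint()   ('.')
```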

The panic function in Lua, static int panic(lua_State *), clashed with Cosmopolitan's void panic(void). I renamed the Lua function to lua_panic. This triggered an error where the panic function was being called in luaL_newstate, so I changed the name there as well.

luaL_loadfilex caused a frame size error — I have never seen this before. A quick internet search shows that this is because a large buffer is allocated on the stack when entering the function, and yes, luaL_loadfilex allocates a LoadF object containing a char buffer of BUFSIZ. I reduced the size of the buffer to BUFSIZ - 64.

loslib.c requires setlocale() and the LC_* values from locale.h; setlocale is defined as an extern value in cosmopolitan.h, but that definition is somehow not enough… screw it, I just disabled os_setlocale in loslib.c, and then it compiles.

I forgot that I shouldn't pass -lm or -ldl. Ok, let's try with all the object files instead of liblua.a:

Umm… okay, it looks like some of the functions defined in the cosmopolitan header are yet to be implemented in the static library. That's okay, I can just quickly fill in the math functions, and I'll comment out strcoll for now, just because I want to see it compile… and it successfully compiles!! Let's run objcopy before trying it out on a system though.

That size re­duc­tion seems a lit­tle too dras­tic, but let’s see if it runs on Linux:

This is pretty in­cred­i­ble: I just had to mod­ify a few lines in a Makefile and some C source files, and I got a Lua ex­e­cutable that works both on Linux and Windows (and pos­si­bly oth­ers as well). Granted, there are still some de­tails to be filled out (floating point cal­cu­la­tion above prints a g), but Cosmopolitan is cur­rently at re­lease 0.0.2, so there is a lot of time.

Hopefully this means that other lan­guages that have source code com­pletely in C can also be com­piled once and run any­where. Actually Portable Python next, maybe?

...

Read the original on ahgamut.github.io »

3 412 shares, 17 trendiness

In Defense of Dumb TVs

Smart TV was once a term re­served for high end tele­vi­sions with built-in stream­ing ca­pa­bil­i­ties.  The com­bi­na­tion of mas­sive re­duc­tions in panel costs, de­creas­ing costs for em­bed­ded com­pute, and the ready avail­abil­ity of con­tent plat­forms from Google, Roku, and oth­ers has made the term ir­rel­e­vant.  Almost every TV you can buy to­day has smarts built-in.  There have been some fan­tas­tic out­comes of that, like break­ing up the tra­di­tional chan­nel bun­dle and in­creas­ing ac­cess to more per­son­al­ized and niche con­tent.

There have been some se­ri­ous neg­a­tives too.  Decreasing prices and de­creas­ing mar­gins on TVs com­bined with long re­place­ment cy­cles have dri­ven com­pa­nies to take ad­van­tage of built-in smarts to en­able a new rev­enue source: user data and ad­ver­tis­ing.  As of Q2 2020, Vizio and HiSense are the only ma­jor brands mak­ing TVs that ship with­out ad­ver­tis­ing en­abled in their UIs.  Sony, Samsung, LG, and oth­ers have ads en­abled by de­fault, most of which can’t be dis­abled.  All of the above brands have built ca­pa­bil­ity to ag­gre­gate data on what con­tent is be­ing viewed, and again, not all of them have the op­tion to dis­able that.  TVs smart enough to help you are also smart enough to harm you.  Incredibly, Samsung even rec­om­mends that you run virus and mal­ware check­ing on your TV reg­u­larly.

An obvious way out of this as a consumer is to buy a TV without smarts built in (a "dumb" TV) and then add your own content source that is privacy focused, like Apple TV, or that you have full control over, like Kodi.  This is something we personally looked for when we were buying a display for the conference room at Framework's headquarters.  Amazingly enough though, we found that none of the major consumer TV brands make basic "dumb" displays anymore.  There are options in the commercial space like NEC's commercial displays, but they cost substantially more than the consumer-focused alternatives.

We nearly gave in and bought a typ­i­cal smart TV, and then we stum­bled on Sceptre’s TV lineup.  You’ll no­tice that they have a range of ex­tremely sim­i­lar look­ing sets that have mi­nor spec­i­fi­ca­tion and weight dif­fer­ences.  Our best guess is that they source LCDs from panel man­u­fac­tur­ers that are ei­ther ex­cess stock or fail the qual­ity spec­i­fi­ca­tions set by other brands and build ex­tremely min­i­mal TVs around them.  We haven’t no­ticed any qual­ity is­sues on our Sceptre set, but for our use case of show­ing slides and spread­sheets, it would­n’t have mat­tered any­way.  The prod­uct was per­fect for us: a dumb TV that as an added bonus re­duces e-waste by us­ing pan­els that would oth­er­wise be scrapped.

It’s an in­ter­est­ing busi­ness model, and one that is con­sumer friendly, en­vi­ron­men­tally con­sid­er­ate, and eco­nom­i­cally sound.  That is a pow­er­ful com­bi­na­tion that we need to see across all of con­sumer elec­tron­ics.

...

Read the original on frame.work »

4 369 shares, 44 trendiness

The Mars Helicopter is Online and Getting Ready to Fly

Earth is the only planet in the solar system with aircraft capable of sustained flight. If the ground-breaking Ingenuity helicopter, currently stowed aboard the similarly spectacular Mars Perseverance rover, accomplishes its planned mission, Mars will become the second planet to have a powered aircraft fly through its atmosphere.

Ingenuity has sent its first sta­tus re­port since land­ing on Mars. The sig­nal, which ar­rived via the iconic Mars Reconnaissance Orbiter (MRO), re­ports on the state of the bat­ter­ies of the he­li­copter as well as the op­er­a­tion of the base sta­tion, which, among other things, op­er­ates the crit­i­cally im­por­tant heaters that keep the elec­tron­ics within an ac­cept­able tem­per­a­ture range. Thankfully, it’s all good news for now, with the bat­ter­ies and base sta­tion op­er­at­ing as ex­pected.

While Ingenuity still has­n’t per­formed a flight yet (hopefully, this be­comes an out­dated state­ment soon), the he­li­copter has al­ready over­come some daunt­ing chal­lenges. Perhaps the most per­ilous por­tion of Ingenuity’s jour­ney was the in­ter­plan­e­tary trip from Earth to Mars as part of the larger Perseverance rover mis­sion. Launched in July of 2020, Perseverance touched down at Jezero Crater on Mars on February 18th. A new high-res­o­lu­tion video of the spec­tac­u­lar sky-crane land­ing of Perseverance was re­leased by NASA ear­lier to­day and is mind-blow­ing all on its own.

It is easy to over­look how chal­leng­ing land­ing on Mars is. The alarm­ing fact of the mat­ter is that only about half of Mars mis­sions have made it suc­cess­fully! One of the main rea­sons for this is the den­sity of the Martian at­mos­phere. Thankfully, rid­ing strapped to the belly of the rover, Ingenuity sur­vived the per­ilous de­scent from space.

One of the most sig­nif­i­cant ob­sta­cles for land­ing on Mars will con­tinue to pre­sent prob­lems for our heroic he­li­copter now that it is safely on the sur­face. The at­mos­pheric pres­sure on the sur­face of Mars is only about 1% that of Earth. To put that in per­spec­tive, the sum­mit of Mount Everest has only one-third the at­mos­pheric pres­sure of sea level. While this is thought to be at (or sadly in some cases be­yond) the limit of what hu­mans can sur­vive, it is well be­yond Earthbound he­li­copters’ range. If you’ve ever won­dered why wealthy ex­plorer-types don’t just cheat and take a he­li­copter to the sum­mit of Everest, that’s why!

Compared to Mars, the air on Everest might as well be pea soup. The ridicu­lously rar­efied air on Mars makes he­li­copter flight ex­tra­or­di­nar­ily chal­leng­ing. Ingenuity will spin its two counter-ro­tat­ing ro­tors five times faster than Earthly he­li­copters, about forty times per sec­ond. Ingenuity is also light, only about 1.8 kilo­grams. The ro­tors are about 1.2 me­ters in di­am­e­ter and are rel­a­tively over­sized to max­i­mize lift.

Mars does give Ingenuity a break in one area, thank­fully. Mars has only about one-third of the sur­face grav­ity of the Earth. If you were to hold the air­craft while stand­ing on the Earth, it would feel roughly as heavy as a two-liter bot­tle (with a cou­ple of sips taken out for luck). On Mars, the ex­act same air­craft would feel like a 20 oz bot­tle (591 mil­li­liters).

The he­li­copter does not play a crit­i­cal role in the sci­ence mis­sion of Perseverance. It is es­sen­tially a tech­no­log­i­cal demon­stra­tion or proof-of-con­cept, and data col­lected from Ingenuity will be used in en­gi­neer­ing fu­ture Martian air­craft. It is so­lar-pow­ered and fea­tures elec­tron­ics that have been minia­tur­ized to keep every­thing light enough for flight.

Ingenuity is also fully au­tonomous. Due to the ex­treme dis­tance of Mars, the he­li­copter mis­sion con­trollers can’t fly the air­craft in re­al­time the way Earthly drone-pi­lots can use joy­sticks to ma­neu­ver at home. The time it takes for a sig­nal to travel from Earth to Mars is longer than the he­li­copter’s en­tire flight-time! Imagine if you were dri­ving a car (on a closed course in your imag­i­na­tion only), and when you turned the wheel, the car ran out of gas be­fore it reg­is­tered your in­put!

If Ingenuity suc­cess­fully demon­strates pow­ered aero­dy­namic flight on Mars, it will be a mile­stone un­like any that has come be­fore. One can only imag­ine the im­pact that fly­ing Mars ex­plor­ers could have on fu­ture mis­sions. A fu­ture he­li­copter could be part­nered with a larger rover and act as a scout, care­fully sur­vey­ing the ter­rain and help­ing the par­ent rover more ef­fi­ciently plot a safe and sci­en­tif­i­cally in­ter­est­ing course. Perhaps a he­li­copter could pick up sam­ples from a wide area and de­liver them to a rover or sta­tion­ary fa­cil­ity with highly so­phis­ti­cated sci­en­tific in­stru­men­ta­tion. Even a stand­alone he­li­copter mis­sion could be con­ceived. There are plenty of cliffs, ice caps, vol­ca­noes, or oth­er­wise in­ac­ces­si­ble parts of the Martian land­scape that are likely per­ma­nently be­yond the reach of ground-based rovers or even hu­mans.

The com­ing weeks will be one of the most ex­cit­ing pe­ri­ods for fans of space ex­plo­ration, avi­a­tion, or any­body who is stirred by ex­tra­or­di­nary ac­com­plish­ments in en­gi­neer­ing and, of course, the in­ge­nu­ity of the bril­liant sci­en­tists and en­gi­neers that built Ingenuity.

...

Read the original on www.universetoday.com »

5 332 shares, 23 trendiness

Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business

People look on as smoke rises on the Syrian side of the bor­der in Hassa, south­ern Turkey, on Jan. 28, 2018, when Turkish jet fight­ers hit People’s Protection Units (YPG) po­si­tions. Superimposed is the re­ply from Sheryl Sandberg in ref­er­ence to block­ing the YPG Facebook page.

Photo il­lus­tra­tion by ProPublica, photo by Ozan Koze/AFP via Getty Images

ProPublica is a non­profit news­room that in­ves­ti­gates abuses of power. Sign up to re­ceive our biggest sto­ries as soon as they’re pub­lished.

As Turkey launched a mil­i­tary of­fen­sive against Kurdish mi­nori­ties in neigh­bor­ing Syria in early 2018, Facebook’s top ex­ec­u­tives faced a po­lit­i­cal dilemma.

Turkey was de­mand­ing the so­cial me­dia gi­ant block Facebook posts from the People’s Protection Units, a mostly Kurdish mili­tia group the Turkish gov­ern­ment had tar­geted. Should Facebook ig­nore the re­quest, as it has done else­where, and risk los­ing ac­cess to tens of mil­lions of users in Turkey? Or should it si­lence the group, known as the YPG, even if do­ing so added to the per­cep­tion that the com­pany too of­ten bends to the wishes of au­thor­i­tar­ian gov­ern­ments?


It was­n’t a par­tic­u­larly close call for the com­pa­ny’s lead­er­ship, newly dis­closed emails show.

"I am fine with this," wrote Sheryl Sandberg, Facebook's No. 2 executive, in a one-sentence message to a team that reviewed the page. Three years later, the YPG's photos and updates about the Turkish military's brutal attacks on the Kurdish minority in Syria still can't be viewed by Facebook users inside Turkey.

The con­ver­sa­tions, among other in­ter­nal emails ob­tained by ProPublica, pro­vide an un­usu­ally di­rect look into how tech gi­ants like Facebook han­dle cen­sor­ship re­quests made by gov­ern­ments that rou­tinely limit what can be said pub­licly. When the Turkish gov­ern­ment at­tacked the Kurds in the Afrin District of north­ern Syria, Turkey also ar­rested hun­dreds of its own res­i­dents for crit­i­ciz­ing the op­er­a­tion.

Publicly, Facebook has underscored that it cherishes free speech: "We believe freedom of expression is a fundamental human right, and we work hard to protect and defend these values around the world," the company wrote in a blog post last month about a new Turkish law requiring that social media firms have a legal presence in the country. "More than half of the people in Turkey rely on Facebook to stay in touch with their friends and family, to express their opinions and grow their businesses."

But behind the scenes in 2018, amid Turkey's military campaign, Facebook ultimately sided with the government's demands. Deliberations, the emails show, were centered on keeping the platform operational, not on human rights. "The page caused us a few PR fires in the past," one Facebook manager warned of the YPG material.

The Turkish government's lobbying on Afrin-related content included a call from the chairman of the BTK, Turkey's telecommunications regulator. He reminded Facebook to "be cautious about the material being posted, especially photos of wounded people," wrote Mark Smith, a U.K.-based policy manager, to Joel Kaplan, Facebook's vice president of global public policy. He also highlighted that the government "may ask us to block entire pages and profiles if they become a focal point for sharing illegal content." (Turkey considers the YPG a terrorist organization, although neither the U.S. nor Facebook does.)

The company's eventual solution was to "geo-block," or selectively ban users in a geographic area from viewing certain content, should the threats from Turkish officials escalate. Facebook had previously avoided the practice, even though it has become increasingly popular among governments that want to hide posts from within their borders.

Facebook con­firmed to ProPublica that it made the de­ci­sion to re­strict the page in Turkey fol­low­ing a le­gal or­der from the Turkish gov­ern­ment — and af­ter it be­came clear that fail­ing to do so would have led to its ser­vices in the coun­try be­ing com­pletely shut down. The com­pany said it had been blocked be­fore in Turkey, in­clud­ing a half-dozen times in 2016.

The content that Turkey deemed offensive, according to internal emails, included photos on Facebook-owned Instagram of wounded YPG fighters, Turkish soldiers and "possibly civilians." At the time, the YPG slammed what it understood to be Facebook's censorship of such material. "Silencing the voice of democracy: In light of the Afrin invasion, YPG experience severe cyberattacks." The group has published graphic images, including photos of mortally wounded fighters; "this is the way NATO ally Turkey secures its borders," YPG wrote in one post.

Facebook spokesman Andy Stone pro­vided a writ­ten state­ment in re­sponse to ques­tions from ProPublica.

"We strive to preserve voice for the greatest number of people," the statement said. "There are, however, times when we restrict content based on local law even if it does not violate our community standards. In this case, we made the decision based on our policies concerning government requests to restrict content and our international human rights commitments. We disclose the content we restrict in our twice-yearly transparency reports and are evaluated by independent experts on our international human rights commitments every two years."

The Turkish embassy in Washington said it contends the YPG is "the Syrian offshoot" of the Kurdistan Workers' Party, or PKK, which the U.S. government considers to be a terrorist organization.

Facebook has considered the YPG page politically sensitive since at least 2015, emails show, when officials discovered the page was inaccurately marked as verified with a blue check mark. In turn, that created "negative coverage on Turkish pro-government media," one executive wrote. When Facebook removed the check mark, it in turn "created negative coverage [in] English language media including on Huffington Post."

In 2018, the review team, which included global policy chief Monika Bickert, laid out the consequences of a ban. The company could set a bad example for future cases and take flak for its decision. "Geo-blocking the YPG is not without risk — activists outside of Turkey will likely notice our actions, and our decision may draw unwanted attention to our overall geo-blocking policy," said one email in late January.

But this time, the team members said, the parties were embroiled in an armed conflict and Facebook officials worried their platform could be shut down entirely in Turkey. "We are in favor of geo-blocking the YPG content," they wrote, "if the prospects of a full-service blockage are great." They prepared a "reactive" press statement: "We received a valid court order from the authorities in Turkey requiring us to restrict access to certain content. Following careful review, we have complied with the order," it said.

In a nine-page ruling by Ankara's 2nd Criminal Judgeship of Peace, government officials listed the YPG's Facebook page among several hundred social media URLs they considered problematic. The court wrote that the sites should be blocked to "protect the right to life or security of life and property, ensure national security, protect public order, prevent crimes, or protect public health," according to a copy of the order obtained by ProPublica.

Kaplan, in a Jan. 26, 2018, email to Sandberg and Facebook CEO Mark Zuckerberg, confirmed that the company had received a Turkish government order demanding that the page be censored, although it wasn't immediately clear if officials were referring to the Ankara court ruling. Kaplan advised the company to "immediately geo-block the page" should Turkey threaten to block all access to Facebook.

Sandberg, in a re­ply to Kaplan, Zuckerberg and oth­ers, agreed. (She had been at the World Economic Forum in Davos, Switzerland, tout­ing Facebook’s role in as­sist­ing vic­tims of nat­ural dis­as­ters.)

"Facebook can't bow to authoritarians to suppress political dissidents and then claim to be just 'following legal orders,'" said Sen. Ron Wyden, an Oregon Democrat who's a prominent Facebook critic. "American companies need to stand up for universal human rights, not just hunt for bigger profits. Mark Zuckerberg has called for big changes to U.S. laws protecting free speech at the same time he protected far-right slime merchants in the U.S. and censored dissidents in Turkey. His priority has been protecting the powerful and Facebook's bottom line, even if it means marginalized groups pay the price."

In a statement to ProPublica, the YPG said censorship by Facebook and other social media platforms is "on an extreme level."

"YPG has actively been using social media platforms like Facebook, Twitter, YouTube, Instagram and others since its foundation," the group said. "YPG uses social media to promote its struggle against jihadists and other extremists who attacked and are attacking Syrian Kurdistan and northern Syria. Those platform[s] have a crucial role in building a public presence and easily reaching communities across the world. However, we have faced many challenges on social media during these years."

Cutting off rev­enue from Turkey could harm Facebook fi­nan­cially, reg­u­la­tory fil­ings sug­gest. Facebook in­cludes rev­enue from Turkey and Russia in the fig­ure it gives for Europe over­all and the com­pany re­ported a 34% in­crease for the con­ti­nent in an­nual rev­enue per user, ac­cord­ing to its 2019 an­nual re­port to the U. S. Securities and Exchange Commission.

Yaman Akdeniz, a founder of the Turkish Freedom of Expression Association, said the YPG block was "not an easy case because Turkey sees the YPG as a terror organization and wants their accounts to be blocked from Turkey. But it just confirms that Facebook doesn't want to challenge these requests, and it was prepared to act."

"Facebook has a transparency problem," he said.

In fact, Facebook doesn't reveal to users that the YPG page is explicitly banned. When ProPublica tried to access the YPG's Facebook page using a Turkish VPN — to simulate browsing the internet from inside the country — a notice read: "The link may be broken, or the page may have been removed." The page is still available on Facebook to people who view the site through U.S. internet providers.


For its part, Facebook reported about 15,300 government requests worldwide for content restrictions during the first half of 2018. Roughly 1,600 came from Turkey during that period, company data shows, accounting for about 10% of requests globally. In a brief post, Facebook said it restricted access to 1,106 items in response to requests from Turkey's telecom regulator, the courts and other agencies, which covers a range of offenses including "personal rights violations, personal privacy, defamation of [first Turkish president Mustafa Kemal] Ataturk, and laws on the unauthorized sale of regulated goods."

Katitza Rodriguez, pol­icy di­rec­tor for global pri­vacy at the Electronic Frontier Foundation, said the Turkish gov­ern­ment has also man­aged to force Facebook and other plat­forms into ap­point­ing le­gal rep­re­sen­ta­tives in the coun­try. If tech com­pa­nies don’t com­ply, she said, Turkish tax­pay­ers would be pre­vented from plac­ing ads and mak­ing pay­ments to Facebook. Because Facebook is a mem­ber of the Global Network Initiative, Rodriguez said, it has pledged to up­hold the group’s hu­man rights prin­ci­ples.

"Companies have an obligation under international human rights law to respect human rights," she said.

Do you have ac­cess to in­for­ma­tion about Facebook that should be pub­lic? Email [email protected]. Here’s how to send tips and doc­u­ments to ProPublica se­curely.

...

Read the original on www.propublica.org »

6 292 shares, 14 trendiness

a new chapter

For many peo­ple, the year 2020 will go down as a mo­ment in time of hard­ship in their lives but for me, the year 2019 was dra­mat­i­cally harder as it was the re­al­iza­tion that a long-term re­la­tion­ship was­n’t go­ing to work out and that every­one, in­clud­ing my chil­dren, would be home­less within 14 days. Oomph.

I packed all of my arte­facts and of­fice equip­ment into a u-haul and left every­thing else, in­clud­ing the fam­ily car to my now ex-wife. To cut a long story short the cost of hous­ing in Sydney, Australia and sole-in­come was a lead­ing fac­tor in the dis­so­lu­tion of the re­la­tion­ship.

I’m do­ing ex­cel­lent, now, but back then the en­tire ex­pe­ri­ence left me shat­tered, soul de­stroyed and burnt out be­yond be­lief. Luckily an el-rando, now friend, saw that I was in need and reached out:

I just want to say, thank-you. I needed that.

Dear reader, if you ever find your­self in cri­sis or a sit­u­a­tion sim­i­lar to mine please do not hes­i­tate to con­tact me if you need a shoul­der or ad­vice. You de­serve to live a life that makes you happy and not mis­er­able.

People often ask me "what got you interested in #vanlife"; now you know. It wasn't an interest per se but more of a necessity, and a key ingredient to ensuring my children would grow up with a father in their lives.

I first heard about the con­cept of #vanlife back in 2015 af­ter read­ing this ar­ti­cle about a Google em­ployee who lived in a box-truck in the com­pa­ny’s park­ing lot:

The events that fol­lowed the Stradbroke camp­ing trip put me on a path to­wards think­ing about okay, what-next and how will I get there? Fast-forward a cou­ple of months later I found my­self on my own month-long camp­ing trip down in Tasmania where I learned some im­por­tant life lessons, bushcraft skills and met some in­ter­est­ing char­ac­ters, some of whom were liv­ing #vanlife.

I was amazed at the qual­ity of life these peo­ple had, how they were able to make ends meet and their re­source­ful­ness. They also knew how to have a freak­ing good time…

Over the course of a month, a small group of peo­ple camp­ing on crown land grew to over 500 peo­ple. Love was free and it was cus­tom to hug every­one. Meanwhile, in a par­al­lel uni­verse, this was hap­pen­ing…

With the new-found knowl­edge and ex­pe­ri­ences, I flew back to sunny Queensland, Australia and started my re­search. After watch­ing hun­dreds of hours of videos on YouTube I found this video, which to this day, rep­re­sents my north star:

Over the months that fol­lowed through 2020, my fa­ther and mother helped make the dream a re­al­ity. I never wanted to live in Sydney, Australia ever again, yet, much to my dis­may, my chil­dren would re­main liv­ing in a city that makes no sense. Putting to­gether a van was a ma­jor step in get­ting my life back on track and en­sur­ing that my chil­dren would know their fa­ther.

It’s now 2021, I’m do­ing bet­ter than I was in the re­la­tion­ship and my chil­dren are hav­ing more qual­ity time with their fa­ther than they have ever had. Over the last 12 months dur­ing var­i­ous stages of the van kit-out, my kids and I have been go­ing away on overnight, week­end and school hol­i­day ad­ven­tures where they get to ex­pe­ri­ence the sights and sounds of Australia.

On the re­la­tion­ship front, I’m pleased to share that there has been min­i­mal con­flict and day-by-day we each learn to love and re­spect each other as the par­ents of our chil­dren. A large part of that is due to some early ad­vice I re­ceived from a man who had also re­cently di­vorced:

To that per­son, thank you for shar­ing your per­spec­tives and ad­vice on how to en­sure that, un­like you see in the movies, co-par­ent­ing does not need to be ad­ver­sar­ial.

So, in sum­mary - every­thing has been go­ing well but there was still one thing left to re­solve:

✔️ hous­ing.

✔️ con­flict free co-par­ent­ing.

✔️ chil­dren hav­ing qual­ity time with their fa­ther.

🚧 mean­ing­ful work

Approximately 21% to 35% of our time awake is spent at work, and every two years that goes by at a company you don't like (or unemployed) wastes 2% of your working life. Time is valuable and there are no guarantees that tomorrow will come, thus it is exceedingly important for happiness that you find work that is meaningful and that treats you well.

Work keeps us busy.  It gives us struc­ture, it de­fines us as func­tion­ing, con­tribut­ing, worth­while cit­i­zens.  It makes us part of the team, a com­mu­nity of fel­low work­ers — even if we do our work re­motely in iso­la­tion.

Pause for a mo­ment and think about this…

After what I had been through in 2019 I spent a con­sid­er­able amount of time in 2020 think­ing about the above ques­tion.

The clock rolled over to 2021 and fortunately I have answers to the above question, and a kick-ass van. But what good is a van if you can't go on adventures and work remotely from it indefinitely?

So any­way, I’ve got some news…

The founders of Gitpod re­cently ap­proached me about join­ing them and it was a no-brainer. I have joined Gitpod. 🎉

The company is remote-first (✔️), embraces asynchrony (✔️), is filled with brilliant people (✔️), the product is open-source (✔️) and built on GitHub (✔️) using the lightweight Visual Studio Code process (✔️).

Gitpod has been a mean­ing­ful and key piece of soft­ware in my toolkit over the last cou­ple years be­cause Gitpod en­ables me to de­velop on any de­vice from any­where. I can hop be­tween pull-re­quests with a sin­gle click and there’s no wait­ing be­cause the pull-re­quest has al­ready been pre-com­piled.

Gitpod has enabled me to standardize development environments between people and, like docker, move to ephemeral instances so that I never experience "but it works on my machine" ever again.

I could talk for hours on end on how mean­ing­ful the work Gitpod is do­ing and how it makes it eas­ier for open-source main­tain­ers to on­board new con­trib­u­tors to their pro­ject but it’s bet­ter that you just ex­pe­ri­ence it for your­self.

So, that's the news. Things are going really well. I want to say thank you to those who have supported me over the last couple of years whilst I figured this all out.

This chap­ter is only just be­gin­ning. I’m blog­ging more and tweet­ing less so if you want to learn about sweet places to visit in Australia, work­ing re­motely from a van or how I get in­ter­net whilst camp­ing out in a re­mote for­est, like, sub­scribe (free) and en­ter your email ad­dress to be no­ti­fied when fu­ture blog posts ship.

...

Read the original on ghuntley.com »

7 279 shares, 12 trendiness

Fightin' Words

Explaining how fight­ing games use de­lay-based and roll­back net­code

I would like to thank krazhier and Keits for tak­ing hours out of their busy sched­ules to dis­cuss tech­ni­cal as­pects of net­code with me, and Sajam for tak­ing time to an­swer in­ter­view ques­tions and be­ing sup­port­ive through­out the writ­ing process. I would also like to es­pe­cially thank MagicMoste for mak­ing all the won­der­ful videos you see in this ar­ti­cle. All their help was of­fered for free and I am thank­ful for their friend­ship.

This ar­ti­cle has been cross-posted on Ars Technica.

You may also en­joy watch­ing a video fea­ture on the top­ics in this ar­ti­cle.

Welcome back to Fightin’ Words! It’s been a while since we last dis­cussed how the most fa­mous fight­ing game bugs have im­pacted the com­mu­ni­ty’s fa­vorite games. Today’s topic is a bit more tech­ni­cal, but it’s an equally im­por­tant fac­tor in how our fa­vorite mod­ern games are played — we’re go­ing to be do­ing a deep dive into net­code.

At its core, net­code is sim­ply a method for two or more com­put­ers, each try­ing to play the same game, to talk to each other over the in­ter­net. While lo­cal play al­ways en­sures that all player in­puts ar­rive and are processed at the same time, net­works are con­stantly un­sta­ble in ways the game can­not con­trol or pre­dict. Information sent to your op­po­nent may be de­layed, ar­rive out of or­der, or be­come lost en­tirely de­pend­ing on dozens of fac­tors, in­clud­ing the phys­i­cal dis­tance to your op­po­nent, if you’re on a WiFi con­nec­tion, and whether your room­mate is watch­ing Netflix. Online play in games is noth­ing new, but fight­ing games have their own set of unique chal­lenges. They tend to in­volve di­rect con­nec­tions to other play­ers, un­like many other pop­u­lar game gen­res, and low, con­sis­tent la­tency is ex­tremely im­por­tant be­cause mus­cle mem­ory and re­ac­tions are at the core of vir­tu­ally every fight­ing game. As a re­sult, two promi­nent strate­gies have emerged for play­ing fight­ing games on­line: de­lay-based net­code and roll­back net­code.
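To make the first of those strategies concrete, here is a minimal sketch (mine, not from the article or any particular game) of the delay-based idea: each frame, your own input is scheduled a few frames into the future and sent to the opponent, and the simulation only advances once the remote input for the current frame has arrived:

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal sketch of delay-based netcode. Inputs for frame N are agreed to
 * apply at frame N + INPUT_DELAY, giving them time to cross the network.
 * If the remote input for the current frame hasn't arrived, the game stalls.
 * Bounds checks and frame wrap-around are omitted for brevity. */
#define INPUT_DELAY 3
#define MAX_FRAMES  4096

typedef uint16_t input_t;                 /* bitmask of buttons/directions */

static input_t local_inputs[MAX_FRAMES];
static input_t remote_inputs[MAX_FRAMES];
static bool    remote_received[MAX_FRAMES];

/* Called once per frame with the freshly polled pad state.
 * Returns false when the simulation has to wait on the network. */
bool step_frame(int frame, input_t pad_state,
                void (*send_input)(int frame, input_t input),
                void (*simulate)(input_t local, input_t remote))
{
    /* schedule our own input a few frames ahead and ship it to the peer */
    local_inputs[frame + INPUT_DELAY] = pad_state;
    send_input(frame + INPUT_DELAY, pad_state);

    if (!remote_received[frame])
        return false;                     /* missing remote input: stall */

    simulate(local_inputs[frame], remote_inputs[frame]);
    return true;
}
```

Rollback netcode keeps the same input exchange but never stalls: it simulates ahead with a predicted remote input and, when the real input arrives and differs, rewinds to a saved game state and re-simulates with the correct one.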

There's been a renewed conviction in the fighting game community that rollback is the best choice, and that fighting game developers who choose to use delay-based netcode are preventing the growth of the genre. While people have been passionate about this topic for many years, frustrations continue to rise as new, otherwise excellent games repeatedly have bad online experiences.

There are rel­a­tively few easy-to-fol­low ex­pla­na­tions for what ex­actly roll­back net­code is, how it works, and why it is so good at hid­ing the ef­fects of bad con­nec­tions (though there are some). Because I feel this topic is ex­tremely im­por­tant for the fu­ture health of the fight­ing game com­mu­nity, I want to help squash some mis­con­cep­tions about net­code and ex­plain both net­code strate­gies thor­oughly so every­one can be in­formed as they dis­cuss. If you stick around to the end, I’ll even in­ter­view some in­dus­try ex­perts and com­mu­nity lead­ers on the topic! Before we dig into the de­tails, though, let’s get one thing straight.

Both com­pa­nies and play­ers should care about good net­code be­cause play­ing on­line is no longer the fu­ture — it’s the pre­sent. While most other video game gen­res have been this way for a decade or longer, fight­ing game de­vel­op­ers seem to be re­sis­tant to em­brac­ing on­line play, per­haps be­cause of the gen­re’s roots in of­fline set­tings such as ar­cades and tour­na­ments. Playing of­fline is great, and it will al­ways have con­sid­er­able value in fight­ing games, but it’s sim­ply a re­al­ity that a large per­cent­age of the player base will never play of­fline. For many fight­ing game fans, play­ing on­line is the game, and a bad on­line ex­pe­ri­ence pre­vents them from get­ting bet­ter, play­ing or rec­om­mend­ing the game to their friends, and ul­ti­mately causes them to sim­ply go do some­thing else. Even if you think you have a good con­nec­tion, or live in an area of the world with ro­bust in­ter­net in­fra­struc­ture, good net­code is still manda­tory. Plus, lost or de­layed in­for­ma­tion hap­pens reg­u­larly even on the best net­works, and poor net­code can ac­tively ham­per matches no mat­ter how smooth the con­di­tions may be. Good net­code also has the ben­e­fit of con­nect­ing re­gions across greater dis­tances, ef­fec­tively unit­ing the global player base as much as pos­si­ble.

Bad net­code can ruin matches. This match, played on­line be­tween two Japanese play­ers, im­pacted who gets to at­tend the Capcom Pro Tour fi­nals. (source)

What about those who never play on­line be­cause they much pre­fer play­ing of­fline with their friends? The healthy ecosys­tem that good net­code cre­ates around a game ben­e­fits every­one. There will be more ac­tive play­ers, more chances to con­sume con­tent for your fa­vorite game — from tech videos to spec­tat­ing on­line tour­na­ments to ex­pand­ing the strat­egy of lesser-used char­ac­ters — and more ex­cite­ment sur­round­ing your game in the FGC. Despite Killer Instinct’s pedi­gree as an ex­cel­lent game, there’s no doubt that its su­perb roll­back net­code has played a huge part in the sus­tained growth of its com­mu­nity.

...

Read the original on ki.infil.net »

8 244 shares, 15 trendiness

file is not a database · Issue #4513 · signalapp/Signal-Desktop


...

Read the original on github.com »

9 236 shares, 29 trendiness

Weird architectures weren't supported to begin with

This post con­tains my own opin­ions, not the opin­ions of my em­ployer or any open source groups I be­long or con­tribute to.

It's also been rewritten a number of times, and (I think) reads confusingly in places. But I promised myself that I'd get it out of the door instead of continuing to sit on it, so here we go.

There's been a decent amount of debate in the open source community about support recently, originating primarily from pyca/cryptography's decision to use Rust for some ASN.1 parsing routines.

To summarize the situation: building the latest pyca/cryptography release from scratch now requires a Rust toolchain. The only Rust toolchain currently available is built on LLVM, which supports a (relatively) limited set of architectures. Rust further whittles this set down into support tiers, with some targets not receiving automated testing (tier 2) or official builds (tier 3).

By contrast, upstream GCC supports a somewhat larger set of architectures. But C, cancer that it is, finds its way onto every architecture with or without GCC's (or LLVM's) help, and thereby bootstraps everything else.

Program pack­agers and dis­trib­u­tors (frequently sep­a­rate from pro­ject main­tain­ers them­selves) are very used to C’s uni­ver­sal pres­ence. They’re so used to it that they’ve built generic mech­a­nisms for putting en­tire dis­tri­b­u­tions onto new ar­chi­tec­tures with only a sin­gle as­sump­tion: the pres­ence of a ser­vice­able C com­piler.

This is the heart of the con­flict: Rust (and many other mod­ern, safe lan­guages) use LLVM for its rel­a­tive sim­plic­ity, but LLVM does not sup­port ei­ther na­tive or cross-com­pi­la­tion to many less pop­u­lar (read: niche) ar­chi­tec­tures. Package man­agers are in­creas­ingly find­ing that one of their old­est as­sump­tions can be eas­ily vi­o­lated, and they’re not happy about that.

But here’s the prob­lem: it’s a bad as­sump­tion. The fact that it’s the de­fault rep­re­sents an un­mit­i­gated se­cu­rity, re­li­a­bil­ity, and re­pro­ducibil­ity dis­as­ter.

Imagine, for a mo­ment, that you’re a main­tainer of a pop­u­lar pro­ject.

Everything has gone right for you: you have happy users, an ac­tive de­vel­op­ment base, and maybe even cor­po­rate spon­sors. You’ve also got a CI/CD pipeline that pro­duces canon­i­cal re­leases of your pro­ject on tested ar­chi­tec­tures; you treat any is­sues with uses of those re­leases as a bug in the pro­ject it­self, since you’ve taken re­spon­si­bil­ity for pack­ag­ing it.

Because your pro­ject is pop­u­lar, oth­ers also dis­trib­ute it: Linux dis­tri­b­u­tions, third-party pack­age man­agers, and cor­po­ra­tions seek­ing to de­ploy their own con­trolled builds. These oth­ers have slightly dif­fer­ent needs and se­tups and, to vary­ing de­grees, will:

* Build your pro­ject with slightly (or com­pletely) dif­fer­ent ver­sions of de­pen­den­cies

* Build your project with slightly (or completely) different optimization flags and other potentially ABI-breaking options

* Distribute your pro­ject with in­se­cure or out­right bro­ken de­faults

* Disable im­por­tant se­cu­rity fea­tures be­cause other parts of their ecosys­tem haven’t caught up

* Patch your project or its build to make it "work" (read: compile and not crash immediately) with completely new dependencies, compilers, toolchains, architectures, and environmental constraints

You don’t know about any of the above un­til the bug re­ports start rolling in: users will re­port bugs that have al­ready been fixed, bugs that you ex­plic­itly doc­u­ment as caused by un­sup­ported con­fig­u­ra­tions, bugs that don’t make any sense what­so­ever.

You struggle to debug your users' reports, since you don't have access to the niche hardware, environments, or corporate systems that they're running on. You slowly burn out under an unending torrent of reports about already fixed bugs whose fixes never seem to make it to your users. Your user base is unhappy, and you start to wonder why you're putting all this effort into project maintenance in the first place. Open source was supposed to be fun!

What's the point of this spiel? It's precisely what happened to pyca/cryptography: nobody asked them whether it was a good idea to try to run their code on HPPA, much less System/390; some packagers just went ahead and did it, and are frustrated that it no longer works. People just assumed that it would, because there is still a norm that everything flows from C, and that any host with a halfway-functional C compiler should have the entire open source ecosystem at its disposal.

Security-sensitive software, particularly software written in unsafe languages, is never secure in its own right.

The security of a program is a function of its own design and testing, as well as the design, testing, and basic correctness of its underlying platform: everything from the userspace, to the kernel, to the compilers themselves. The latter is an unsolved problem in the very best of cases: bugs are regularly found in even the most mature compilers (Clang, GCC) and their most mature backends (x86, ARM). Tiny changes to or differences in build systems can have profound effects at the binary level, like accidentally removing security mitigations. Seemingly innocuous patches can make otherwise safe code exploitable in the context of other vulnerabilities.

The problem gets worse as we move towards niche architectures and targets that are used primarily by small hobbyist communities. Consider m68k (one of the other architectures affected by pyca/cryptography's move to Rust): even GCC was considering removing support due to lack of maintenance, until hobbyists stepped in. That isn't to say that any particular niche target is full of bugs; only to say that it's a greater likelihood for niche targets in general. Nobody is regularly testing the mountain of userspace code that implicitly forms an operating contract with arbitrary programs on these platforms.

Project main­tain­ers don’t want to chase down com­piler bugs on ISAs or sys­tems that they never in­tended to sup­port in the first place, and aren’t re­ceiv­ing any ac­tive sup­port feed­back about. They es­pe­cially don’t want to have vul­ner­a­bil­i­ties as­so­ci­ated with their pro­jects be­cause of buggy tool­chains or tool­ing in­er­tia when work­ing on se­cu­rity im­prove­ments.

As some­one who likes C: this is all C’s fault. Really.

Beyond language-level unsafety (plenty of people have covered that already), C is organizationally unsafe:

There’s no stan­dard way to write tests for C.

Functional and/​or unit tests alone would go a long way in as­sur­ing base­line cor­rect­ness on weird ar­chi­tec­tures or plat­forms, but the cog­ni­tive over­head of test­ing C and get­ting those tests run­ning en­sures that well-tested builds of C pro­grams will con­tinue to be the ex­cep­tion, rather than the rule.

There’s no stan­dard way to build C pro­grams.

Make is fine, but it’s not stan­dard. Disturbingly large swathes of crit­i­cal open source in­fra­struc­ture are com­piled us­ing a hodge­podge of Make, au­to­gen­er­ated rules from au­to­tools, and the main­tain­er’s bou­tique shell scripts. One con­se­quence of this is that C builds tend to be flex­i­ble to a fault: prospec­tive pack­agers can in­ject all sorts of be­hav­ior-mod­i­fy­ing flags that may not be at­tested di­rectly in the com­piled bi­nary or other build prod­ucts. The re­sult: it’s al­most im­pos­si­ble to prove that two sep­a­rate builds on dif­fer­ent ma­chines are the same, which means more main­tainer pain.

There’s no stan­dard way to dis­trib­ute C pro­grams.

Yes, I know that pack­age man­agers ex­ist. Yes, I know how to sta­t­i­cally link. Yes, I know how to ven­dor li­braries and dis­trib­ute self-con­tained pro­gram bundles”. None of these are or amount to a com­plete stan­dard, and each in­tro­duces ad­di­tional lo­gis­ti­cal or se­cu­rity prob­lems.

There’s no such thing as truly cross-platform C.

The C abstract machine, despite looking a lot like a PDP-11, leaks the underlying memory and ordering semantics of the architecture being targeted. The result is that even seasoned C programmers regularly rely on architecture-specific assumptions when writing ostensibly cross-platform code: assumptions about the atomicity of reads and writes, operation ordering, coherence and visibility in self-modifying code, the safety and performance of unaligned accesses, and so forth. Each of these, apart from being a potential source of unsafety, is impossible to detect statically in the general case: they are, after all, perfectly correct (and frequently intended!) on the programmer’s host architecture.
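
To pick just one of those assumptions, here is a minimal sketch (the packet buffer and field offset are hypothetical) of the unaligned-access habit: the pointer cast "works" on x86, where unaligned loads are legal and cheap, but it is undefined behavior in ISO C and can trap or silently misbehave on stricter architectures.

```c
/* unaligned.c -- sketch of an architecture-specific assumption that
 * compiles everywhere but is only "correct" on forgiving hardware. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Pretend this is a wire-format packet with a 32-bit field at offset 1. */
    unsigned char packet[8] = {0x01, 0xEF, 0xBE, 0xAD, 0xDE, 0x00, 0x00, 0x00};

    /* The common-but-wrong version: reinterpret an unaligned address.
     * Undefined behavior in ISO C; fine on x86, a trap or slow fixup elsewhere. */
    uint32_t field = *(const uint32_t *)(packet + 1);

    /* The portable version: copy the bytes into an aligned object. */
    uint32_t portable;
    memcpy(&portable, packet + 1, sizeof portable);

    printf("%08x %08x\n", (unsigned)field, (unsigned)portable);
    return 0;
}
```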

By contemporary programming language standards, these are conspicuous gaps in functionality: we’ve long since learned to bake testing, building, distribution, and sound abstract machine semantics into the standard tooling for languages (and language design itself). But their absence is doubly pernicious: they ensure that C remains a perpetually unsafe development ecosystem, and an appealing target when bootstrapping a new platform.

The project maintainer isn’t the only person hurting in the status quo.

Everything stated above also leads to a bum job for the lowly package maintainer. They’re (probably) also an unpaid open source hobbyist, and they’re operating with constraints that the upstream isn’t likely to immediately understand:

* The need to link against versions of dependencies that have already been packaged (and perhaps patched)

* ABI and ISA subset constraints, stemming from a need to distribute binaries that function with relatively old versions of glibc or x86-64 CPUs without modern extensions

* Limited visibility into each project’s test suite and how to run it, much less what to do when it fails

They also have to deal with users who are unsympathetic to those reports, and who:

* Rarely submit reports to the packager (they bug the project directly instead!), or don’t follow up on reports

* Demand fundamentally conflicting properties in their packages: both the latest and greatest features from the upstream, and also that the packagers never break their deployments (hence the neverending stream of untested, unofficial patches)

All of this leads to package maintainer burnout, and an (increasingly) adversarial relationship between projects and their downstream distributors. Neither of those bodes well for projects, the health of critical packaging ecosystems, or (most importantly of all) the users themselves.

I am conceited enough to think that my potential solutions are worth broadcasting to the world. Here they are.

Build systems are a mess; I’ve talked about their complexity in a professional setting.

A long-term solution to the problem of supporting platforms not originally considered by project authors is going to be two-pronged:

Builds need to be observable and reviewable: project maintainers should be able to get the exact invocations and dependencies that a build was conducted with, and perform automatic triaging of build information. This will require environment- and ecosystem-wide changes: object and packaging formats will need to be updated; standards for metadata and for sharing information from an arbitrary distributor to a project will need to be devised. Reasonable privacy concerns about the scope of that information and its availability will need to be addressed.

Reporting needs to be better directed: individual (minimally technical!) end users should be able to figure out what exactly is failing and who to phone when it falls over. That means rigorously tracking the patches that distributors apply (see build observability above) and creating mechanisms that deliver information to the people who need it. Those same mechanisms need to allow for interaction: there’s nothing worse than a flood of automated bug reports with insufficient context.

Rust certainly isn’t the first ecosystem to provide different support tiers, but they do a great job:

Tiers are explicitly enumerated and documented. If you’re in a particular tier bucket, you know exactly what you’re getting, what’s guaranteed about it, and what you’ll need to do on your own.

Official builds provide transitive guarantees: they can carry patches to the compiler and other components without needing the entire system to be patched. Carrying patches still isn’t great, but it currently isn’t avoidable.

Tiers are baked into the tooling itself: you can’t use rustup on a DEC Alpha and (incorrectly) expect to pull down a mature, tested toolchain. You can’t, because it would be a lie. This is in contrast to the C paradigm, where an un(der)-tested compiler will happily be under-checked by a big blob of autotools shell, producing a build of indeterminate correctness.

Expectations are managed. This point is really just a culmination of the first three: with explicit tiers, there’s no more implicit guarantee that a minimally functional build toolchain entails fully functional and supported software. Users can be pointed to a single page that tells them that they’re doing something that nobody has tried to (or currently wants to) support, and that exposes options to them: help out, fund the project, nag their employer, &c.

I put this one last because it’s flippant, but it’s maybe the most important one: outside of hobbyists playing with weird architectures for fun (and accepting the overwhelming likelihood that most projects won’t immediately work for them), open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

Companies should be paying for this directly: if pyca/cryptography actually broke on HPPA or IA-64, then HP or Intel or whoever should be forking over money to get it fixed or using their own horde of engineers to fix it themselves. No free work for platforms that only corporations are using. No, this doesn’t violate the open-source ethos; nothing about OSS says that you have to bend over backwards to support a corporate platform that you didn’t care about in the first place.

Reddit discussion

...

Read the original on blog.yossarian.net »

10 225 shares, 13 trendiness, words and minutes reading time

jopohl/urh

The Universal Radio Hacker (URH) is a complete suite for wireless protocol investigation with native support for many common Software Defined Radios. URH allows easy demodulation of signals combined with automatic detection of modulation parameters, making it a breeze to identify the bits and bytes that fly over the air. As data often gets encoded before transmission, URH offers customizable decodings to crack even sophisticated encodings like CC1101 data whitening. When it comes to protocol reverse-engineering, URH is helpful in two ways: you can either manually assign protocol fields and message types, or let URH automatically infer protocol fields with a rule-based intelligence. Finally, URH includes a fuzzing component aimed at stateless protocols and a simulation environment for stateful attacks.

In order to get started, see the installation instructions below.

If you like URH, please star this repository and join our Slack channel. We appreciate your support!

We encourage researchers working with URH to cite this WOOT ’18 paper or directly use the following BibTeX entry.

URH runs on Windows, Linux and macOS. Click on your operating system below to view installation instructions.

On Windows, URH can be installed with its Installer. No further dependencies are required.

If you get an error about missing api-ms-win-crt-runtime-l1-1-0.dll, run Windows Update or directly install KB2999226.

URH is available on PyPI, so you can install it with pip:

# IMPORTANT: Make sure your pip is up to date

sudo python3 -m pip install --upgrade pip  # Update your pip installation

sudo python3 -m pip install urh            # Install URH

This is the recommended way to install URH on Linux because it comes with all native extensions precompiled.

In order to access your SDR as a non-root user, install the appropriate udev rules. You can find them in the wiki.

URH is included in the repositories of many Linux distributions such as Arch Linux, Gentoo, Fedora, openSUSE or NixOS. There is also a package for FreeBSD. If available, simply use your package manager to install URH.

Note: For native support, you must install the corresponding -dev package(s) for your SDR(s), such as hackrf-dev, before installing URH.

URH is available as a snap: https://snapcraft.io/urh

The official URH Docker image is available here. It has all native backends included and is ready to operate.

See the wiki for a list of external decodings provided by our community. Thanks for that!

...

Read the original on github.com »
