GTA Online. Infamous for its slow loading times. Having picked up the game again to ﬁnish some of the newer heists I was shocked (/s) to discover that it still loads just as slow as the day it was released 7 years ago.
It was time. Time to get to the bottom of this.
First I wanted to check whether someone had already solved this problem. Most of the results I found pointed towards anecdata about how the game is so sophisticated that it takes this long to load, stories about how the p2p network architecture is rubbish (not saying that it isn’t), some elaborate ways of loading into story mode and a solo session after that, and a couple of mods that allowed skipping the startup R* logo video. Some more reading told me we could save a whopping 10-30 seconds with these combined!
Meanwhile on my PC…
I know my setup is dated but what on earth could take 6x longer to load into online mode? I couldn’t measure any difference using the story-to-online loading technique as others have found before me. Even if it did work the results would be down in the noise.
If this poll is to be trusted then the issue is widespread enough to mildly annoy more than 80% of the player base. It’s been 7 years R*!
Looking around a bit to find out who the lucky ~20% with sub-3-minute load times are, I came across a few benchmarks with high-end gaming PCs and an online mode load time of about 2 minutes. I would hack for a 2 minute load time! It does seem to be hardware-dependent but something doesn’t add up here…
How come their story mode still takes near a minute to load? (The M.2 one didn’t count the startup logos btw.) Also, loading story to online takes them only a minute more while I’m getting about ﬁve more. I know that their hardware specs are a lot better but surely not 5x better.
Armed with such powerful tools as the Task Manager I began to investigate what resources could be the bottleneck.
After taking a minute to load the common resources used for both story and online modes (which is near on par with high-end PCs) GTA decides to max out a single core on my machine for four minutes and do nothing else.
Disk usage? None! Network usage? There’s a bit, but it drops basically to zero after a few seconds (apart from loading the rotating info banners). GPU usage? Zero. Memory usage? Completely ﬂat…
What, is it mining crypto or something? I smell code. Really bad code.
While my old AMD CPU has 8 cores and it does pack a punch, it was made in the olden days. Back when AMD’s single-thread performance was way behind Intel’s. This might not explain all of the load time differences but it should explain most of it.
What’s odd is that it’s using up just the CPU. I was expecting vast amounts of disk reads loading up resources or loads of network requests trying to negotiate a session in the p2p network. But this? This is probably a bug.
Proﬁlers are a great way of ﬁnding CPU bottlenecks. There’s only one problem - most of them rely on instrumenting the source code to get a perfect picture of what’s happening in the process. And I don’t have the source code. Nor do I need microsecond-perfect readings - I have 4 minutes’ worth of a bottleneck.
Enter stack sampling: for closed-source applications there’s only one option. Dump the running process’ stack and the current instruction pointer’s location at set intervals, then add the samples up to build a calling tree and get statistics on what’s going on. There’s only one profiler that I know of (might be ignorant here) that can do this on Windows. And it hasn’t been updated in over 10 years. It’s Luke Stackwalker! Someone, please give this project some love :)
Normally Luke would group the same functions together but since I don’t have debugging symbols I had to eyeball nearby addresses to guess if it’s the same place. And what do we see? Not one bottleneck but two of them!
Having borrowed my friend’s completely legitimate copy of the industry-standard disassembler (no, I really can’t afford the thing… gonna learn to ghidra one of these days) I went to take GTA apart.
That doesn’t look right at all. Most high-proﬁle games come with built-in protection against reverse engineering to keep away pirates, cheaters, and modders. Not that it has ever stopped them.
There seems to be some sort of an obfuscation/encryption at play here that has replaced most instructions with gibberish. Not to worry, we simply need to dump the game’s memory while it’s executing the part we want to look at. The instructions have to be de-obfuscated before running one way or another. I had Process Dump lying around, so I used that, but there are plenty of other tools available to do this sort of thing.
Disassembling the now-less-obfuscated dump reveals that one of the addresses has a label pulled out of somewhere! It’s strlen? Going down the call stack the next one is labeled vscan_fn and after that the labels end, tho I’m fairly conﬁdent it’s sscanf.
It’s parsing something. Parsing what? Untangling the disassembly would take forever so I decided to dump some samples from the running process using x64dbg. Some debug-stepping later it turns out it’s… JSON! They’re parsing JSON. A whopping 10 megabytes worth of JSON with some 63k item entries.
What is it? It appears to be data for a “net shop catalog” according to some references. I assume it contains a list of all the possible items and upgrades you can buy in GTA Online.
Clearing up some confusion: I believe these are in-game money purchasable items, not directly linked with microtransactions.
But 10 megs? That’s nothing! And using sscanf may not be optimal but surely it’s not that bad? Well…
Yeah, that’s gonna take a while… To be fair I had no idea most sscanf implementations called strlen so I can’t blame the developer who wrote this. I would assume it just scanned byte by byte and could stop on a NULL.
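To make the cost concrete, here’s a minimal sketch of the pattern (my own toy code, not the game’s): parsing one huge buffer by repeatedly calling sscanf on the unparsed tail. If the sscanf implementation calls strlen on its input first, every one of the tens of thousands of parse calls walks megabytes of remaining text, and the total work grows roughly quadratically with the file size.

    #include <cstdio>
    #include <string>

    // Hypothetical parse loop illustrating the pattern, not GTA's actual code.
    static long long parse_all(const char* data) {
        long long sum = 0;
        int value = 0, consumed = 0;
        const char* cursor = data;
        // Many sscanf implementations strlen(cursor) up front, i.e. they scan
        // to the end of the *entire remaining buffer* just to parse a few bytes.
        while (std::sscanf(cursor, "%d%n", &value, &consumed) == 1) {
            sum += value;
            cursor += consumed;   // advance past the item we just parsed
        }
        return sum;
    }

    int main() {
        std::string data;
        for (int i = 0; i < 1250000; ++i) data += "1234567 ";  // ~10 MB of digits
        // O(n^2) with a strlen-per-call sscanf; a byte-by-byte parser that
        // stops at the terminating NUL would be O(n).
        std::printf("sum = %lld\n", parse_all(data.c_str()));
        return 0;
    }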
Turns out the second offender is called right next to the ﬁrst one. They’re both even called in the same if statement as seen in this ugly decompilation:
All labels are mine, no idea what the functions/parameters are actually called.
The second problem? Right after parsing an item, it’s stored in an array (or an inlined C++ list? not sure). Each entry looks something like this:
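Roughly, and with invented names (I don’t know the real layout), picture each entry as the item’s unique hash plus its parsed data:

    #include <cstdint>

    // Purely illustrative; field names and sizes are guesses.
    struct ItemData {
        std::uint8_t fields[64];   // price, flags, name hash, etc.
    };

    struct CatalogEntry {
        std::uint64_t key;         // unique hash identifying the item
        ItemData      data;        // the item's parsed contents
    };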
But before it’s stored? It checks the entire array, one by one, comparing the hash of the item to see if it’s already in the list or not. With ~63k entries that’s (n^2+n)/2 = (63000^2+63000)/2 = 1,984,531,500 checks if my math is right. Most of them useless. You have unique hashes; why not use a hash map?
I named it hashmap while reversing but it’s clearly not_a_hashmap. And it gets even better. The hash-array-list-thing is empty before loading the JSON. And all of the items in the JSON are unique! They don’t even need to check if it’s in the list or not! They even have a function to directly insert the items! Just use that! Srsly, WAT!?
Now that’s nice and all, but no one is going to take me seriously unless I test this so I can write a clickbait title for the post.
The plan? Write a .dll, inject it in GTA, hook some functions, ???, proﬁt.
The JSON problem is hairy, I can’t realistically replace their parser. Replacing sscanf with one that doesn’t depend on strlen would be more realistic. But there’s an even easier way.
* hook strlen and wait for it to be called on a long string
* “cache” the start and length of that string
* if it’s called again with a pointer within the string’s range, return the cached value
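A minimal sketch of that hook (names invented, and assuming the injected DLL can redirect the game’s strlen import to it): remember the start and length of the last big string we measured, and answer any later call that points inside that same buffer from the cache instead of rescanning.

    #include <cstddef>
    #include <cstring>

    namespace {
        const char* cached_start  = nullptr;
        std::size_t cached_length = 0;
    }

    // Replacement strlen that the injected DLL would point the game at.
    extern "C" std::size_t hooked_strlen(const char* s) {
        // If s points inside the string we already measured, its length is the
        // cached length minus however far s sits past the cached start.
        if (cached_start && s >= cached_start && s <= cached_start + cached_length) {
            return cached_length - static_cast<std::size_t>(s - cached_start);
        }
        std::size_t len = std::strlen(s);
        if (len > 1024) {          // only cache long strings (the 10 MB JSON)
            cached_start  = s;
            cached_length = len;
        }
        return len;
    }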
And as for the hash-array problem, it’s more straightforward - just skip the duplicate checks entirely and insert the items directly since we know the values are unique.
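For comparison, here is roughly what the options look like (invented types, nothing like the game’s real code): the shipped linear scan costs O(n) per insert and O(n²) over the whole load, a hash set makes membership O(1), and if the inputs are known to be unique the check can be dropped entirely.

    #include <cstdint>
    #include <unordered_set>
    #include <vector>

    struct Entry { std::uint64_t key; /* ...item data... */ };

    // What the game appears to do: scan the whole array before every insert.
    void insert_with_scan(std::vector<Entry>& items, const Entry& e) {
        for (const Entry& existing : items)       // O(n) per call, O(n^2) total
            if (existing.key == e.key) return;    // duplicate, skip
        items.push_back(e);
    }

    // Fix 1: track seen hashes in a hash set, O(1) membership per insert.
    void insert_with_set(std::vector<Entry>& items,
                         std::unordered_set<std::uint64_t>& seen, const Entry& e) {
        if (seen.insert(e.key).second)            // true only for new keys
            items.push_back(e);
    }

    // Fix 2: the JSON entries are already unique, so insert directly.
    void insert_direct(std::vector<Entry>& items, const Entry& e) {
        items.push_back(e);
    }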
Well, did it work then?
Hell yes, it did! :))
Most likely, this won’t solve everyone’s load times - there might be other bottlenecks on different systems, but it’s such a gaping hole that I have no idea how R* has missed it all these years.
* It turns out GTA struggles to parse a 10MB JSON ﬁle
* The JSON parser itself is poorly built / naive, and
* After parsing there’s a slow item de-duplication routine
If this somehow reaches Rockstar: the problems shouldn’t take more than a day for a single dev to solve. Please do something about it :<
You could either switch to a hashmap for the de-duplication or completely skip it on startup as a faster ﬁx. For the JSON parser - just swap out the library for a more performant one. I don’t think there’s any easier way out.
I came across Cosmopolitan on Hacker News, and I was initially confused, due to a few memories of cross-compilation nightmares: while it should be possible to compile for the same architecture regardless of operating system, wouldn’t the OS get confused by the leading bytes of the executable? I read the article explaining how it works, but most of it went over my head.
The example on the Github README used the following script for compilation:
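From memory, the build amounted to roughly the following (the exact flags differ between Cosmopolitan releases, so treat this as an approximation rather than the canonical command):

    # Approximate reconstruction of the README's example, not an exact copy.
    gcc -g -Os -static -nostdlib -nostdinc -fno-pie -no-pie -mno-red-zone \
        -fno-omit-frame-pointer -o hello.com.dbg hello.c \
        -fuse-ld=bfd -Wl,-T,ape.lds -include cosmopolitan.h \
        crt.o ape.o cosmopolitan.a
    objcopy -S -O binary hello.com.dbg hello.com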
I converted it into a simple Makeﬁle to run the compilation commands. I tried a bunch of simple C programs (basic arithmetic, reading and writing to ﬁles) on Linux+Windows (compiled on Linux), and all of them worked.
I decided to try compiling a high-level language built on C. I originally picked Python, but the Makeﬁle for Python seemed too complicated to mess with, so I then picked Lua, which looked much simpler in comparison.
I started out by blindly copy-pasting the ﬂags and includes used in the sample compilation on Github. Ah, it would have been wonderful for my laziness if it compiled out of the box. Following is a play-by-play commentary of trying to compile Lua.
The first problem I ran into was header clashes: if I didn’t pass -nostdlib -nostdinc while compiling each object file, -include cosmopolitan.h would clash with the system headers. But blocking the system headers meant I would have to change every #include of a system header. Instead, I created a bunch of dummy headers with the same names as those in the C stdlib and pointed the includes at those.
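As a sketch of that shim trick (the paths are my own, not anything Cosmopolitan prescribes): each dummy header can be essentially empty, since the force-included cosmopolitan.h already declares everything; the shim directory just has to win over the system include path (e.g. -nostdinc -Ishim -include cosmopolitan.h).

    /* shim/stdio.h -- same idea for stdlib.h, string.h, limits.h, ...        */
    /* Everything is already declared by the force-included cosmopolitan.h,  */
    /* so this file only exists to satisfy `#include <stdio.h>`.             */
    #pragma once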
Naming clashes: some of the macros in cosmopolitan.h clashed with macro/function names in Lua: reverse and isempty. I changed the Lua source to avoid this.
A macro FIRST_RESERVED was broken because UCHAR_MAX was missing. I thought UCHAR_MAX was supposed to be in limits.h, but the limits.h part of cosmopolitan.h did not have UCHAR_MAX (it had SCHAR_MAX, though). I added a #define setting UCHAR_MAX to __UINT8_MAX__ (i.e. 255).
The default Lua Makefile attempts to use _setjmp/_longjmp in ldo.c when on Linux. I disabled the LUA_USE_LINUX flag for compiling the object files, but this caused an issue with tmpnam in loslib.c (mkstemp is available in Cosmopolitan). I changed the Lua source to use setjmp/longjmp. A similar issue showed up in lauxlib.c for sys/wait.h (which is a no-op on non-POSIX systems, as per the Lua source code), and in liolib.c for sys/types.h, so I disabled LUA_USE_POSIX there as well.
The localeconv() function (part of locale.h) is not implemented in cosmopolitan.h, and this caused an error while compiling lobject.c (the macro lua_getlocaledecpoint() depends on localeconv()). I changed the macro to just return ‘.’.
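In luaconf.h that macro normally pulls the decimal point out of localeconv(); the change described above boils down to hard-coding it:

    /* luaconf.h -- original definition relies on localeconv() from locale.h: */
    /* #define lua_getlocaledecpoint()  (localeconv()->decimal_point[0])      */

    /* patched: no localeconv() in Cosmopolitan, so just assume '.'           */
    #define lua_getlocaledecpoint()  ('.')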
The panic function in Lua, static int panic (lua_State *), clashed with Cosmopolitan’s void panic(void). I renamed the Lua function to lua_panic. This triggered an error where the panic function was being called in luaL_newstate, so I changed the name there as well.
luaL_loadfilex caused a frame size error — I had never seen this one before. A quick internet search showed that it happens when a large buffer is allocated on the stack when entering a function, and indeed, luaL_loadfilex allocates a LoadF object containing a char buffer of BUFSIZ. I reduced the size of the buffer to BUFSIZ - 64.
loslib.c requires setlocale() and the LC_* constants from locale.h; setlocale is declared as an extern in cosmopolitan.h, but that declaration is somehow not enough… screw it, I just disabled os_setlocale in loslib.c, and then it compiles.
I forgot that I shouldn’t pass -lm or -ldl. Ok, let’s try linking with all the object files instead of liblua.a:
Umm… okay, it looks like some of the functions deﬁned in the cosmopolitan header are yet to be implemented in the static library. That’s okay, I can just quickly ﬁll in the math functions, and I’ll comment out strcoll for now, just because I want to see it compile…. and it successfully compiles!! Let’s run objcopy before trying it out on a system though.
That size reduction seems a little too drastic, but let’s see if it runs on Linux:
This is pretty incredible: I just had to modify a few lines in a Makeﬁle and some C source ﬁles, and I got a Lua executable that works both on Linux and Windows (and possibly others as well). Granted, there are still some details to be ﬁlled out (ﬂoating point calculation above prints a g), but Cosmopolitan is currently at release 0.0.2, so there is a lot of time.
Hopefully this means that other languages that have source code completely in C can also be compiled once and run anywhere. Actually Portable Python next, maybe?
Smart TV was once a term reserved for high end televisions with built-in streaming capabilities. The combination of massive reductions in panel costs, decreasing costs for embedded compute, and the ready availability of content platforms from Google, Roku, and others has made the term irrelevant. Almost every TV you can buy today has smarts built-in. There have been some fantastic outcomes of that, like breaking up the traditional channel bundle and increasing access to more personalized and niche content.
There have been some serious negatives too. Decreasing prices and decreasing margins on TVs combined with long replacement cycles have driven companies to take advantage of built-in smarts to enable a new revenue source: user data and advertising. As of Q2 2020, Vizio and HiSense are the only major brands making TVs that ship without advertising enabled in their UIs. Sony, Samsung, LG, and others have ads enabled by default, most of which can’t be disabled. All of the above brands have built capability to aggregate data on what content is being viewed, and again, not all of them have the option to disable that. TVs smart enough to help you are also smart enough to harm you. Incredibly, Samsung even recommends that you run virus and malware checking on your TV regularly.
An obvious way out of this as a consumer is to buy a TV without smarts built in (a “dumb TV”) and then add your own content source that is privacy focused like Apple TV or that you have full control over like Kodi. This is something we personally looked for when we were buying a display for the conference room at Framework’s headquarters. Amazingly enough though, we found that none of the major consumer TV brands make basic “dumb” displays anymore. There are options in the commercial space like NEC’s commercial displays, but they cost substantially more than the consumer-focused alternatives.
We nearly gave in and bought a typical smart TV, and then we stumbled on Sceptre’s TV lineup. You’ll notice that they have a range of extremely similar looking sets that have minor speciﬁcation and weight differences. Our best guess is that they source LCDs from panel manufacturers that are either excess stock or fail the quality speciﬁcations set by other brands and build extremely minimal TVs around them. We haven’t noticed any quality issues on our Sceptre set, but for our use case of showing slides and spreadsheets, it wouldn’t have mattered anyway. The product was perfect for us: a dumb TV that as an added bonus reduces e-waste by using panels that would otherwise be scrapped.
It’s an interesting business model, and one that is consumer friendly, environmentally considerate, and economically sound. That is a powerful combination that we need to see across all of consumer electronics.
Earth is the only planet in the solar system with aircraft capable of sustained ﬂight. Suppose the ground-breaking Ingenuity helicopter, currently stowed aboard the similarly spectacular Mars Perseverance rover, accomplishes its planned mission. In that case, Mars will become the second planet to have a powered aircraft ﬂy through its atmosphere.
Ingenuity has sent its ﬁrst status report since landing on Mars. The signal, which arrived via the iconic Mars Reconnaissance Orbiter (MRO), reports on the state of the batteries of the helicopter as well as the operation of the base station, which, among other things, operates the critically important heaters that keep the electronics within an acceptable temperature range. Thankfully, it’s all good news for now, with the batteries and base station operating as expected.
While Ingenuity still hasn’t performed a ﬂight yet (hopefully, this becomes an outdated statement soon), the helicopter has already overcome some daunting challenges. Perhaps the most perilous portion of Ingenuity’s journey was the interplanetary trip from Earth to Mars as part of the larger Perseverance rover mission. Launched in July of 2020, Perseverance touched down at Jezero Crater on Mars on February 18th. A new high-resolution video of the spectacular sky-crane landing of Perseverance was released by NASA earlier today and is mind-blowing all on its own.
It is easy to overlook how challenging landing on Mars is. The alarming fact of the matter is that only about half of all Mars missions have made it successfully! One of the main reasons for this is the low density of the Martian atmosphere. Thankfully, riding strapped to the belly of the rover, Ingenuity survived the perilous descent from space.
One of the most significant obstacles for landing on Mars will continue to present problems for our heroic helicopter now that it is safely on the surface. The atmospheric pressure on the surface of Mars is only about 1% that of Earth. To put that in perspective, the summit of Mount Everest has only one-third the atmospheric pressure of sea level. While this is thought to be at (or sadly in some cases beyond) the limit of what humans can survive, it is well beyond Earthbound helicopters’ range. If you’ve ever wondered why wealthy explorer-types don’t just cheat and take a helicopter to the summit of Everest, that’s why!
Compared to Mars, the air on Everest might as well be pea soup. The ridiculously rareﬁed air on Mars makes helicopter ﬂight extraordinarily challenging. Ingenuity will spin its two counter-rotating rotors ﬁve times faster than Earthly helicopters, about forty times per second. Ingenuity is also light, only about 1.8 kilograms. The rotors are about 1.2 meters in diameter and are relatively oversized to maximize lift.
Mars does give Ingenuity a break in one area, thankfully. Mars has only about one-third of the surface gravity of the Earth. If you were to hold the aircraft while standing on the Earth, it would feel roughly as heavy as a two-liter bottle (with a couple of sips taken out for luck). On Mars, the exact same aircraft would feel like a 20 oz bottle (591 milliliters).
The helicopter does not play a critical role in the science mission of Perseverance. It is essentially a technological demonstration or proof-of-concept, and data collected from Ingenuity will be used in engineering future Martian aircraft. It is solar-powered and features electronics that have been miniaturized to keep everything light enough for ﬂight.
Ingenuity is also fully autonomous. Due to the extreme distance of Mars, the helicopter mission controllers can’t ﬂy the aircraft in realtime the way Earthly drone-pilots can use joysticks to maneuver at home. The time it takes for a signal to travel from Earth to Mars is longer than the helicopter’s entire ﬂight-time! Imagine if you were driving a car (on a closed course in your imagination only), and when you turned the wheel, the car ran out of gas before it registered your input!
If Ingenuity successfully demonstrates powered aerodynamic ﬂight on Mars, it will be a milestone unlike any that has come before. One can only imagine the impact that ﬂying Mars explorers could have on future missions. A future helicopter could be partnered with a larger rover and act as a scout, carefully surveying the terrain and helping the parent rover more efﬁciently plot a safe and scientifically interesting course. Perhaps a helicopter could pick up samples from a wide area and deliver them to a rover or stationary facility with highly sophisticated scientiﬁc instrumentation. Even a standalone helicopter mission could be conceived. There are plenty of cliffs, ice caps, volcanoes, or otherwise inaccessible parts of the Martian landscape that are likely permanently beyond the reach of ground-based rovers or even humans.
The coming weeks will be one of the most exciting periods for fans of space exploration, aviation, or anybody who is stirred by extraordinary accomplishments in engineering and, of course, the ingenuity of the brilliant scientists and engineers that built Ingenuity.
People look on as smoke rises on the Syrian side of the border in Hassa, southern Turkey, on Jan. 28, 2018, when Turkish jet ﬁghters hit People’s Protection Units (YPG) positions. Superimposed is the reply from Sheryl Sandberg in reference to blocking the YPG Facebook page.
Photo illustration by ProPublica, photo by Ozan Koze/AFP via Getty Images
As Turkey launched a military offensive against Kurdish minorities in neighboring Syria in early 2018, Facebook’s top executives faced a political dilemma.
Turkey was demanding the social media giant block Facebook posts from the People’s Protection Units, a mostly Kurdish militia group the Turkish government had targeted. Should Facebook ignore the request, as it has done elsewhere, and risk losing access to tens of millions of users in Turkey? Or should it silence the group, known as the YPG, even if doing so added to the perception that the company too often bends to the wishes of authoritarian governments?
It wasn’t a particularly close call for the company’s leadership, newly disclosed emails show.
“I am ﬁne with this,” wrote Sheryl Sandberg, Facebook’s No. 2 executive, in a one-sentence message to a team that reviewed the page. Three years later, YPG’s photos and updates about the Turkish military’s brutal attacks on the Kurdish minority in Syria still can’t be viewed by Facebook users inside Turkey.
The conversations, among other internal emails obtained by ProPublica, provide an unusually direct look into how tech giants like Facebook handle censorship requests made by governments that routinely limit what can be said publicly. When the Turkish government attacked the Kurds in the Afrin District of northern Syria, Turkey also arrested hundreds of its own residents for criticizing the operation.
Publicly, Facebook has underscored that it cherishes free speech: “We believe freedom of expression is a fundamental human right, and we work hard to protect and defend these values around the world,” the company wrote in a blog post last month about a new Turkish law requiring that social media ﬁrms have a legal presence in the country. “More than half of the people in Turkey rely on Facebook to stay in touch with their friends and family, to express their opinions and grow their businesses.”
But behind the scenes in 2018, amid Turkey’s military campaign, Facebook ultimately sided with the government’s demands. Deliberations, the emails show, were centered on keeping the platform operational, not on human rights. “The page caused us a few PR ﬁres in the past,” one Facebook manager warned of the YPG material.
The Turkish government’s lobbying on Afrin-related content included a call from the chairman of the BTK, Turkey’s telecommunications regulator. He reminded Facebook “to be cautious about the material being posted, especially photos of wounded people,” wrote Mark Smith, a U.K.-based policy manager, to Joel Kaplan, Facebook’s vice president of global public policy. “He also highlighted that the government may ask us to block entire pages and profiles if they become a focal point for sharing illegal content.” (Turkey considers the YPG a terrorist organization, although neither the U.S. nor Facebook does.)
The company’s eventual solution was to “geo-block,” or selectively ban users in a geographic area from viewing certain content, should the threats from Turkish ofﬁcials escalate. Facebook had previously avoided the practice, even though it has become increasingly popular among governments that want to hide posts from within their borders.
Facebook conﬁrmed to ProPublica that it made the decision to restrict the page in Turkey following a legal order from the Turkish government — and after it became clear that failing to do so would have led to its services in the country being completely shut down. The company said it had been blocked before in Turkey, including a half-dozen times in 2016.
The content that Turkey deemed offensive, according to internal emails, included photos on Facebook-owned Instagram of “wounded YPG ﬁghters, Turkish soldiers and possibly civilians.” At the time, the YPG slammed what it understood to be Facebook’s censorship of such material. “Silencing the voice of democracy: In light of the Afrin invasion, YPG experience severe cyberattacks.” The group has published graphic images, including photos of mortally wounded ﬁghters; “this is the way NATO ally Turkey secures its borders,” YPG wrote in one post.
Facebook spokesman Andy Stone provided a written statement in response to questions from ProPublica.
“We strive to preserve voice for the greatest number of people,” the statement said. “There are, however, times when we restrict content based on local law even if it does not violate our community standards. In this case, we made the decision based on our policies concerning government requests to restrict content and our international human rights commitments. We disclose the content we restrict in our twice-yearly transparency reports and are evaluated by independent experts on our international human rights commitments every two years.”
The Turkish embassy in Washington said it contends the YPG is the “Syrian offshoot” of the Kurdistan Workers’ Party, or PKK, which the U.S. government considers to be a terrorist organization.
Facebook has considered the YPG page politically sensitive since at least 2015, emails show, when ofﬁcials discovered the page was inaccurately marked as veriﬁed with a blue check mark. In turn, “that created negative coverage on Turkish pro-government media,” one executive wrote. When Facebook removed the check mark, it in turn “created negative coverage [in] English language media including on Hufﬁngton Post.”
In 2018, the review team, which included global policy chief Monika Bickert, laid out the consequences of a ban. The company could set a bad example for future cases and take ﬂak for its decision. “Geo-blocking the YPG is not without risk — activists outside of Turkey will likely notice our actions, and our decision may draw unwanted attention to our overall geo-blocking policy,” said one email in late January.
But this time, the team members said, the parties were embroiled in an armed conﬂict and Facebook ofﬁcials worried their platform could be shut down entirely in Turkey. “We are in favor of geo-blocking the YPG content,” they wrote, “if the prospects of a full-service blockage are great.” They prepared a “reactive” press statement: “We received a valid court order from the authorities in Turkey requiring us to restrict access to certain content. Following careful review, we have complied with the order,” it said.
In a nine-page ruling by Ankara’s 2nd Criminal Judgeship of Peace, government ofﬁcials listed YPG’s Facebook page among several hundred social media URLs they considered problematic. The court wrote that the sites should be blocked to “protect the right to life or security of life and property, ensure national security, protect public order, prevent crimes, or protect public health,” according to a copy of the order obtained by ProPublica.
Kaplan, in a Jan. 26, 2018, email to Sandberg and Facebook CEO Mark Zuckerberg, conﬁrmed that the company had received a Turkish government order demanding that the page be censored, although it wasn’t immediately clear if ofﬁcials were referring to the Ankara court ruling. Kaplan advised the company to “immediately geo-block the page” should Turkey threaten to block all access to Facebook.
Sandberg, in a reply to Kaplan, Zuckerberg and others, agreed. (She had been at the World Economic Forum in Davos, Switzerland, touting Facebook’s role in assisting victims of natural disasters.)
“Facebook can’t bow to authoritarians to suppress political dissidents and then claim to be just ‘following legal orders,’” said Sen. Ron Wyden, an Oregon Democrat who’s a prominent Facebook critic. “American companies need to stand up for universal human rights, not just hunt for bigger profits. Mark Zuckerberg has called for big changes to U.S. laws protecting free speech at the same time he protected far-right slime merchants in the U.S. and censored dissidents in Turkey. His priority has been protecting the powerful and Facebook’s bottom line, even if it means marginalized groups pay the price.”
In a statement to ProPublica, the YPG said censorship by Facebook and other social media platforms “is on an extreme level.”
“YPG has actively been using social media platforms like Facebook, Twitter, YouTube, Instagram and others since its foundation,” the group said. “YPG uses social media to promote its struggle against jihadists and other extremists who attacked and are attacking Syrian Kurdistan and northern Syria. Those platform[s] have a crucial role in building a public presence and easily reaching communities across the world. However, we have faced many challenges on social media during these years.”
Cutting off revenue from Turkey could harm Facebook financially, regulatory filings suggest. Facebook includes revenue from Turkey and Russia in the figure it gives for Europe overall, and the company reported a 34% increase for the continent in annual revenue per user, according to its 2019 annual report to the U.S. Securities and Exchange Commission.
Yaman Akdeniz, a founder of the Turkish Freedom of Expression Association, said the YPG block was “not an easy case because Turkey sees the YPG as a terror organization and wants their accounts to be blocked from Turkey. But it just conﬁrms that Facebook doesn’t want to challenge these requests, and it was prepared to act.”
“Facebook has a transparency problem,” he said.
In fact, Facebook doesn’t reveal to users that the YPG page is explicitly banned. When ProPublica tried to access YPG’s Facebook page using a Turkish VPN — to simulate browsing the internet from inside the country — a notice read: “The link may be broken, or the page may have been removed.” The page is still available on Facebook to people who view the site through U.S. internet providers.
For its part, Facebook reported about 15,300 government requests worldwide for content restrictions during the ﬁrst half of 2018. Roughly 1,600 came from Turkey during that period, company data shows, accounting for about 10% of requests globally. In a brief post, Facebook said it restricted access to 1,106 items in response to requests from Turkey’s telecom regulator, the courts and other agencies, “which covers a range of offenses including personal rights violations, personal privacy, defamation of [ﬁrst Turkish president Mustafa Kemal] Ataturk, and laws on the unauthorized sale of regulated goods.”
Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, said the Turkish government has also managed to force Facebook and other platforms into appointing legal representatives in the country. If tech companies don’t comply, she said, Turkish taxpayers would be prevented from placing ads and making payments to Facebook. Because Facebook is a member of the Global Network Initiative, Rodriguez said, it has pledged to uphold the group’s human rights principles.
“Companies have an obligation under international human rights law to respect human rights,” she said.
For many people, the year 2020 will go down as a time of hardship in their lives, but for me the year 2019 was dramatically harder: it brought the realization that a long-term relationship wasn’t going to work out and that everyone, including my children, would be homeless within 14 days. Oomph.
I packed all of my artefacts and office equipment into a U-Haul and left everything else, including the family car, to my now ex-wife. To cut a long story short, the cost of housing in Sydney, Australia on a sole income was a leading factor in the dissolution of the relationship.
I’m doing excellently now, but back then the entire experience left me shattered, soul-destroyed and burnt out beyond belief. Luckily an el-rando, now friend, saw that I was in need and reached out:
I just want to say, thank-you. I needed that.
Dear reader, if you ever ﬁnd yourself in crisis or a situation similar to mine please do not hesitate to contact me if you need a shoulder or advice. You deserve to live a life that makes you happy and not miserable.
People often ask me what got me interested in #vanlife; now you know. It wasn’t an interest per se but more of a necessity and a key ingredient to ensuring my children would grow up with a father in their lives.
I ﬁrst heard about the concept of #vanlife back in 2015 after reading this article about a Google employee who lived in a box-truck in the company’s parking lot:
The events that followed the Stradbroke camping trip put me on a path towards thinking about okay, what-next and how will I get there? Fast-forward a couple of months later I found myself on my own month-long camping trip down in Tasmania where I learned some important life lessons, bushcraft skills and met some interesting characters, some of whom were living #vanlife.
I was amazed at the quality of life these people had, how they were able to make ends meet and their resourcefulness. They also knew how to have a freaking good time…
Over the course of a month, a small group of people camping on crown land grew to over 500 people. Love was free and it was custom to hug everyone. Meanwhile, in a parallel universe, this was happening…
With the new-found knowledge and experiences, I ﬂew back to sunny Queensland, Australia and started my research. After watching hundreds of hours of videos on YouTube I found this video, which to this day, represents my north star:
Over the months that followed through 2020, my father and mother helped make the dream a reality. I never wanted to live in Sydney, Australia ever again, yet, much to my dismay, my children would remain living in a city that makes no sense. Putting together a van was a major step in getting my life back on track and ensuring that my children would know their father.
It’s now 2021, I’m doing better than I was in the relationship and my children are having more quality time with their father than they have ever had. Over the last 12 months during various stages of the van kit-out, my kids and I have been going away on overnight, weekend and school holiday adventures where they get to experience the sights and sounds of Australia.
On the relationship front, I’m pleased to share that there has been minimal conﬂict and day-by-day we each learn to love and respect each other as the parents of our children. A large part of that is due to some early advice I received from a man who had also recently divorced:
To that person, thank you for sharing your perspectives and advice on how to ensure that, unlike you see in the movies, co-parenting does not need to be adversarial.
So, in summary - everything has been going well but there was still one thing left to resolve:
✔️ conﬂict free co-parenting.
✔️ children having quality time with their father.
🚧 meaningful work
Approximately 21% to 35% of our waking time is spent at work, and every two years that go by at a company you don’t like (or unemployed) wastes 2% of your working life. Time is valuable and there are no guarantees that tomorrow will come, thus it is exceedingly important for your happiness that you find work that is meaningful and that treats you well.
Work keeps us busy. It gives us structure, it deﬁnes us as functioning, contributing, worthwhile citizens. It makes us part of the team, a community of fellow workers — even if we do our work remotely in isolation.
Pause for a moment and think about this…
After what I had been through in 2019 I spent a considerable amount of time in 2020 thinking about the above question.
The clock rolled over to 2021 and fortunately I have answers to the above question, plus a kick-ass van. But what good is a van if you can’t go on adventures and work remotely from it indefinitely?
So anyway, I’ve got some news…
The founders of Gitpod recently approached me about joining them and it was a no-brainer. I have joined Gitpod. 🎉
The company is remote-first (✔️), embraces asynchrony (✔️), is filled with brilliant people (✔️), the product is open-source (✔️) and built on GitHub (✔️) using the lightweight Visual Studio Code process (✔️).
Gitpod has been a meaningful and key piece of software in my toolkit over the last couple years because Gitpod enables me to develop on any device from anywhere. I can hop between pull-requests with a single click and there’s no waiting because the pull-request has already been pre-compiled.
Gitpod has enabled me to standardize development environments between people and, like docker, move to ephemeral instances so that I never experience “but it works on my machine” ever again.
I could talk for hours on end on how meaningful the work Gitpod is doing and how it makes it easier for open-source maintainers to onboard new contributors to their project but it’s better that you just experience it for yourself.
So, that’s the news. Things are going really well. I want to say thank you to those who have supported me over the last couple of years whilst I figured this all out.
This chapter is only just beginning. I’m blogging more and tweeting less so if you want to learn about sweet places to visit in Australia, working remotely from a van or how I get internet whilst camping out in a remote forest, like, subscribe (free) and enter your email address to be notiﬁed when future blog posts ship.
Explaining how ﬁghting games use delay-based and rollback netcode
I would like to thank krazhier and Keits for taking hours out of their busy schedules to discuss technical aspects of netcode with me, and Sajam for taking time to answer interview questions and being supportive throughout the writing process. I would also like to especially thank MagicMoste for making all the wonderful videos you see in this article. All their help was offered for free and I am thankful for their friendship.
This article has been cross-posted on Ars Technica.
You may also enjoy watching a video feature on the topics in this article.
Welcome back to Fightin’ Words! It’s been a while since we last discussed how the most famous ﬁghting game bugs have impacted the community’s favorite games. Today’s topic is a bit more technical, but it’s an equally important factor in how our favorite modern games are played — we’re going to be doing a deep dive into netcode.
At its core, netcode is simply a method for two or more computers, each trying to play the same game, to talk to each other over the internet. While local play always ensures that all player inputs arrive and are processed at the same time, networks are constantly unstable in ways the game cannot control or predict. Information sent to your opponent may be delayed, arrive out of order, or become lost entirely depending on dozens of factors, including the physical distance to your opponent, if you’re on a WiFi connection, and whether your roommate is watching Netﬂix. Online play in games is nothing new, but ﬁghting games have their own set of unique challenges. They tend to involve direct connections to other players, unlike many other popular game genres, and low, consistent latency is extremely important because muscle memory and reactions are at the core of virtually every ﬁghting game. As a result, two prominent strategies have emerged for playing ﬁghting games online: delay-based netcode and rollback netcode.
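As a rough sketch of the difference (simplified, with invented names, and not any particular game’s implementation): a delay-based game simply refuses to advance a frame until the remote input for it has arrived, while a rollback game advances immediately with a predicted input, snapshots its state every frame, and, when the real input arrives late and turns out to differ from the guess, restores the snapshot and re-simulates up to the present.

    #include <cstdint>
    #include <map>

    struct GameState { /* positions, health, timers, ... */ };
    struct Input     { std::uint16_t buttons = 0; };

    // Heavily simplified rollback session; real implementations also handle
    // input delay, frame pacing, desync detection, and the actual networking.
    struct RollbackSession {
        GameState state;
        std::map<int, GameState> snapshots;  // state at the start of each frame
        std::map<int, Input>     local;      // our own inputs, by frame
        std::map<int, Input>     predicted;  // what we guessed the opponent pressed
        int current_frame = 0;

        static void simulate(GameState& s, const Input& p1, const Input& p2) {
            // advance the game by exactly one frame using both players' inputs
            (void)s; (void)p1; (void)p2;
        }

        // Called once per rendered frame: never wait for the network.
        void advance(const Input& my_input) {
            snapshots[current_frame] = state;
            local[current_frame]     = my_input;
            // simplest prediction: the opponent keeps holding what they held last
            Input guess = predicted.empty() ? Input{} : predicted.rbegin()->second;
            predicted[current_frame] = guess;
            simulate(state, my_input, guess);
            ++current_frame;
        }

        // Called when the opponent's real input for an earlier frame arrives.
        void confirm(int frame, const Input& real) {
            bool mispredicted = real.buttons != predicted[frame].buttons;
            predicted[frame]  = real;                   // now a confirmed input
            if (!mispredicted) return;                  // guess was right, nothing to fix
            state = snapshots[frame];                   // roll back to that frame...
            for (int f = frame; f < current_frame; ++f)
                simulate(state, local[f], predicted[f]); // ...and re-simulate to "now"
        }
    };

In the rollback case the misprediction is usually only a frame or two deep, so the re-simulation finishes within a single frame’s budget and the player just sees a small visual correction instead of a stutter.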
There’s been a renewed conviction in the fighting game community that rollback is the best choice, and that fighting game developers who choose to use delay-based netcode are holding back the growth of the genre. While people have been passionate about this topic for many years, frustrations continue to rise as new, otherwise excellent games repeatedly ship with bad online experiences.
There are relatively few easy-to-follow explanations for what exactly rollback netcode is, how it works, and why it is so good at hiding the effects of bad connections (though there are some). Because I feel this topic is extremely important for the future health of the ﬁghting game community, I want to help squash some misconceptions about netcode and explain both netcode strategies thoroughly so everyone can be informed as they discuss. If you stick around to the end, I’ll even interview some industry experts and community leaders on the topic! Before we dig into the details, though, let’s get one thing straight.
Both companies and players should care about good netcode because playing online is no longer the future — it’s the present. While most other video game genres have been this way for a decade or longer, ﬁghting game developers seem to be resistant to embracing online play, perhaps because of the genre’s roots in ofﬂine settings such as arcades and tournaments. Playing ofﬂine is great, and it will always have considerable value in ﬁghting games, but it’s simply a reality that a large percentage of the player base will never play ofﬂine. For many ﬁghting game fans, playing online is the game, and a bad online experience prevents them from getting better, playing or recommending the game to their friends, and ultimately causes them to simply go do something else. Even if you think you have a good connection, or live in an area of the world with robust internet infrastructure, good netcode is still mandatory. Plus, lost or delayed information happens regularly even on the best networks, and poor netcode can actively hamper matches no matter how smooth the conditions may be. Good netcode also has the beneﬁt of connecting regions across greater distances, effectively uniting the global player base as much as possible.
Bad netcode can ruin matches. This match, played online between two Japanese players, impacted who gets to attend the Capcom Pro Tour ﬁnals. (source)
What about those who never play online because they much prefer playing ofﬂine with their friends? The healthy ecosystem that good netcode creates around a game beneﬁts everyone. There will be more active players, more chances to consume content for your favorite game — from tech videos to spectating online tournaments to expanding the strategy of lesser-used characters — and more excitement surrounding your game in the FGC. Despite Killer Instinct’s pedigree as an excellent game, there’s no doubt that its superb rollback netcode has played a huge part in the sustained growth of its community.
This post contains my own opinions, not the opinions of my employer or any open source groups I belong or contribute to.
It’s also been rewritten 2½ times, and (I think) reads confusingly in places. But I promised myself that I’d get it out of the door instead of continuing to sit on it, so here we go.
There’s been a decent amount of debate in the open source community recently about platform support, originating primarily from pyca/cryptography’s decision to use Rust for some ASN.1 parsing routines.
To summarize the situation: building the latest pyca/cryptography release from scratch now requires a Rust toolchain. The only current Rust toolchain is built on LLVM, which supports a (relatively) limited set of architectures. Rust further whittles this set down into support tiers, with some targets not receiving automated testing (tier 2) or official builds (tier 3).
By contrast, upstream GCC supports a somewhat larger set of architectures. But C, cancer that it is, finds its way onto every architecture with or without GCC’s (or LLVM’s) help, and thereby bootstraps everything else.
Program packagers and distributors (frequently separate from project maintainers themselves) are very used to C’s universal presence. They’re so used to it that they’ve built generic mechanisms for putting entire distributions onto new architectures with only a single assumption: the presence of a serviceable C compiler.
This is the heart of the conﬂict: Rust (and many other modern, safe languages) use LLVM for its relative simplicity, but LLVM does not support either native or cross-compilation to many less popular (read: niche) architectures. Package managers are increasingly ﬁnding that one of their oldest assumptions can be easily violated, and they’re not happy about that.
But here’s the problem: it’s a bad assumption. The fact that it’s the default represents an unmitigated security, reliability, and reproducibility disaster.
Imagine, for a moment, that you’re a maintainer of a popular project.
Everything has gone right for you: you have happy users, an active development base, and maybe even corporate sponsors. You’ve also got a CI/CD pipeline that produces canonical releases of your project on tested architectures; you treat any issues with uses of those releases as a bug in the project itself, since you’ve taken responsibility for packaging it.
Because your project is popular, others also distribute it: Linux distributions, third-party package managers, and corporations seeking to deploy their own controlled builds. These others have slightly different needs and setups and, to varying degrees, will:
* Build your project with slightly (or completely) different versions of dependencies
* Build your project with slightly (or completely) different optimization flags and other potentially behavior-altering options
* Distribute your project with insecure or outright broken defaults
* Disable important security features because other parts of their ecosystem haven’t caught up
* Patch your project or its build to make it “work” (read: compile and not crash immediately) with completely new dependencies, compilers, toolchains, architectures, and environmental constraints
You don’t know about any of the above until the bug reports start rolling in: users will report bugs that have already been ﬁxed, bugs that you explicitly document as caused by unsupported conﬁgurations, bugs that don’t make any sense whatsoever.
You struggle to debug your users’ reports, since you don’t have access to the niche hardware, environments, or corporate systems that they’re running on. You slowly burn out under an unending torrent of reports about bugs that are already fixed but whose fixes never seem to make it to your users. Your user base is unhappy, and you start to wonder why you’re putting all this effort into project maintenance in the first place. Open source was supposed to be fun!
What’s the point of this spiel? It’s precisely what happened to pyca/cryptography: nobody asked them whether it was a good idea to try to run their code on HPPA, much less System/390; some packagers just went ahead and did it, and are frustrated that it no longer works. People just assumed that it would, because there is still a norm that everything flows from C, and that any host with a halfway-functional C compiler should have the entire open source ecosystem at its disposal.
Security-sensitive software, particularly software written in unsafe languages, is never secure in its own right.
The security of a program is a function of its own design and testing,
as well as the design, testing, and basic correctness of its underlying platform: everything from the userspace, to the kernel, to the compilers themselves. The latter is an unsolved problem in the very best of cases: bugs are regularly
found in even the most mature compilers (Clang, GCC) and their most mature backends (x86, ARM). Tiny changes to or differences in build systems can have profound effects at the binary level, like
accidentally removing security mitigations. Seemingly innocuous patches can make otherwise safe code
exploitable in the context of other vulnerabilities.
The problem gets worse as we move towards niche architectures and targets that are used primarily by small hobbyist communities. Consider m68k
(one of the other architectures affected by pyca/cryptography’s move to Rust): even GCC was considering removing support due to lack of maintenance, until hobbyists stepped in. That isn’t to say that any particular niche target is full of bugs; only that bugs are more likely on niche targets in general. Nobody is regularly testing the mountain of userspace code that implicitly forms an operating contract with arbitrary programs on these platforms.
Project maintainers don’t want to chase down compiler bugs on ISAs or systems that they never intended to support in the ﬁrst place, and aren’t receiving any active support feedback about. They especially don’t want to have vulnerabilities associated with their projects because of buggy toolchains or tooling inertia when working on security improvements.
As someone who likes C: this is all C’s fault. Really.
Beyond language-level unsafety (plenty of people have
covered that already), C is organizationally unsafe:
There’s no standard way to write tests for C.
Functional and/or unit tests alone would go a long way in assuring baseline correctness on weird architectures or platforms, but the cognitive overhead of testing C and getting those tests running ensures that well-tested builds of C programs will continue to be the exception, rather than the rule.
There’s no standard way to build C programs.
Make is ﬁne, but it’s not standard. Disturbingly large swathes of critical open source infrastructure are compiled using a hodgepodge of Make, autogenerated rules from autotools, and the maintainer’s boutique shell scripts. One consequence of this is that C builds tend to be ﬂexible to a fault: prospective packagers can inject all sorts of behavior-modifying ﬂags that may not be attested directly in the compiled binary or other build products. The result: it’s almost impossible to prove that two separate builds on different machines are the same, which means more maintainer pain.
There’s no standard way to distribute C programs.
Yes, I know that package managers exist. Yes, I know how to statically link. Yes, I know how to vendor libraries and distribute self-contained program “bundles”. None of these are or amount to a complete standard, and each introduces additional logistical or security problems.
There’s no such thing as truly cross-platform C.
The C abstract machine, despite looking a lot like a PDP-11, leaks the underlying memory and ordering semantics of the architecture being targeted. The result is that even seasoned C programmers regularly rely on architecture-specific assumptions when writing ostensibly cross-platform code: assumptions about the atomicity of reads and writes, operation ordering, coherence and visibility in self-modifying code, the safety and performance of unaligned accesses, and so forth. Each of these, apart from being a potential source of unsafety, is impossible to detect statically in the general case: they are, after all, perfectly correct (and frequently intended!) on the programmer’s host architecture.
By contemporary programming language standards, these are conspicuous gaps in functionality: we’ve long since learned to bake testing, building, distribution, and sound abstract machine semantics into the standard tooling for languages (and language design itself). But their absence is doubly pernicious: they ensure that C remains a perpetually unsafe development ecosystem, and an appealing target when bootstrapping a new platform.
The project maintainer isn’t the only person hurting in the status quo.
Everything stated above also leads to a bum job for the lowly package maintainer. They’re (probably) also an unpaid open source hobbyist, and they’re operating with constraints that the upstream isn’t likely to immediately understand:
* The need to link against versions of dependencies that have already been packaged (and perhaps patched)
* ABI and ISA subset constraints, stemming from a need to distribute binaries that function with relatively old versions of glibc or x86-64 CPUs without modern extensions
* Limited visibility into each project’s test suite and how to run it, much less what to do when it fails
They also have to deal with users who are unsympathetic to those constraints, and who:
* Rarely submit reports to the packager (they bug the project directly instead!), or don’t follow up on reports
* Demand fundamentally conflicting properties in their packages: both the latest and greatest features from the upstream, and also that the packagers never break their deployments (hence the neverending stream of untested, unofficial patches)
All of this leads to package maintainer burnout, and an (increasingly) adversarial relationship between projects and their downstream distributors. Neither of those bodes well for projects, the health of critical packaging ecosystems, or (most importantly of all) the users themselves.
I am conceited enough to think that my potential solutions are worth broadcasting to the world. Here they are.
Build systems are a mess; I’ve talked about their complexity in a previous post.
A long-term solution to the problem of support for platforms not originally considered by project authors is going to be two-pronged:
Builds need to be observable and reviewable: project maintainers should be able to get the exact invocations and dependencies that a build was conducted with and perform automatic triaging of build information. This will require environment and ecosystem-wide changes: object and packaging formats will need to be updated; standards for metadata and sharing information from an arbitrary distributor to a project will need to be devised. Reasonable privacy concerns about the scope of information and its availability will need to be addressed.
Reporting needs to be better directed: individual (minimally technical!) end users should be able to figure out what exactly is failing and who to phone when it falls over. That means rigorously tracking the patches that distributors apply (see build observability above) and creating mechanisms that deliver information to the people who need it. Those same mechanisms need to allow for some interaction: there’s nothing worse than a flood of automated bug reports with insufficient context.
Rust certainly isn’t the ﬁrst ecosystem to provide different support tiers, but they do a great job:
Tiers are explicitly enumerated and documented. If you’re in a particular tier bucket, you know exactly what you’re getting, what’s guaranteed about it, and what you’ll need to do on your own.
Ofﬁcial builds provide transitive guarantees: they can carry patches to the compiler and other components without needing the entire system to be patched. Carrying patches still isn’t great, but it currently isn’t avoidable.
Tiers are baked into the tooling itself: you can’t use rustup on DEC Alpha and (incorrectly) expect to pull down a mature, tested toolchain. You can’t because it would be a lie. This is in contrast to the C paradigm, where an un(der)-tested compiler will happily be under-checked by a big blob of shell, producing a build of indeterminate correctness.
Expectations are managed. This point is really just a culmination of the ﬁrst three: with explicit tiers, there’s no more implicit guarantee that a minimally functional build toolchain entails fully functional and supported software. Users can be pointed to a single page that tells them that they’re doing something that nobody has tried to (or currently wants to) support, and expose options to them: help out, fund the project, nag their employer, &c.
I put this one last because it’s flippant, but it’s maybe the most important one: outside of hobbyists playing with weird architectures for fun (and accepting the overwhelming likelihood that most projects won’t immediately work for them), open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware.
Companies should be paying for this directly: if pyca/cryptography actually broke on HPPA or IA-64, then HP or Intel or whoever should be forking over money to get it fixed or using their own horde of engineers to fix it themselves. No free work for platforms that only corporations are using. No, this doesn’t violate the open-source ethos; nothing about OSS says that you have to bend over backwards to support a corporate platform that you didn’t care about in the first place.
The Universal Radio Hacker (URH) is a complete suite for wireless protocol investigation with native support for many common Software Deﬁned Radios. URH allows easy demodulation of signals combined with an automatic detection of modulation parameters making it a breeze to identify the bits and bytes that ﬂy over the air. As data often gets encoded before transmission, URH offers customizable decodings to crack even sophisticated encodings like CC1101 data whitening. When it comes to protocol reverse-engineering, URH is helpful in two ways. You can either manually assign protocol ﬁelds and message types or let URH automatically infer protocol ﬁelds with a rule-based intelligence. Finally, URH entails a fuzzing component aimed at stateless protocols and a simulation environment for stateful attacks.
In order to get started, see the installation instructions below.
If you like URH, please star this repository and join our Slack channel. We appreciate your support!
We encourage researchers working with URH to cite this WOOT′18 paper or directly use the following BibTeX entry.
URH runs on Windows, Linux and macOS. Click on your operating system below to view installation instructions.
On Windows, URH can be installed with its Installer. No further dependencies are required.
If you get an error about missing api-ms-win-crt-runtime-l1-1-0.dll, run Windows Update or directly install KB2999226.
URH is available on PyPi so you can install it with
# IMPORTANT: Make sure your pip is up to date
sudo python3 -m pip install --upgrade pip  # Update your pip installation
sudo python3 -m pip install urh            # Install URH
This is the recommended way to install URH on Linux because it comes with all native extensions precompiled.
In order to access your SDR as a non-root user, install the appropriate udev rules. You can find them in the wiki.
URH is included in the repositories of many linux distributions such as Arch Linux, Gentoo, Fedora, openSUSE or NixOS. There is also a package for FreeBSD. If available, simply use your package manager to install URH.
Note: For native support, you must install the corresponding -dev package(s) for your SDR(s), such as hackrf-dev, before installing URH.
URH is available as a snap: https://snapcraft.io/urh
The ofﬁcial URH docker image is available here. It has all native backends included and ready to operate.
See the wiki for a list of external decodings provided by our community. Thanks for that!