This month marks the publication of “The Actual Star,” a new, wildly ambitious, wildly successful novel by Monica Byrne, spanning 3,000 years of history from the ancient Maya to a distant future.
The tale braids three moments: 1012, when the Mayan royal dynasty collapses; 2012, when a Maya girl raised in Minnesota returns to her father’s birthplace in Belize; and 3012, when the 8m humans who survived the climate emergency inhabit a high-tech, nomadic civilization.
All three are a mix of mysticism, blood, carnality, and a deep love of place and the Earth. They all tell the tale of schisms, when different views of how to live a right life in harmony with nature are contested with rage and violence, and they all connect to the Maya faith.
Byrne’s deep historical research into the Maya of 1012 and her imagining of a radical new wandering society of 3012 are both profoundly, gorgeously foreign, reminiscent of Ada Palmer’s books (one of the few sf writers who can conjure a truly different set of social norms).
The Actual Star challenges the linear, efﬁciency-driven “rationality” that has brought our civilization to the brink of collapse and asks us to imagine utterly new ways of understanding the world, even as it probes the ﬂaws inherent in any system of knowing and being.
And for all that, it’s an sf novel. It’s got a plot (three, actually) that’s intense and gripping, as mystical symbols like ancient haunted caves and godlike jaguars aren’t just symbols — they’re physical things that characters we care about have to cope with.
It’s a book about sacriﬁce, about the long view and deep time, about the universality of human experience and the particularity of any given moment. It’s a ﬁrst-rate work of sf, and a hopeful and fearful book about the climate. It’s just great.
“Breaking In” is my latest column for Locus Magazine; it’s both the story of how I broke into science fiction and an explanation of why there’s so little to learn from that story.
When I was trying to sell my ﬁrst stories, I obsessively sought career advice and memoirs from established writers. I sat in on countless sf convention panels in which bestselling writers explained how they’d butter up long-dead editors to sell to long-defunct publications.
None of them ever mentioned that as interesting as this stuff might be as an historical artifact, it had zero applicability to the market I was trying to break into.
Not only did these writers enter a fundamentally different — and long-extinct — publishing world than the current one, but their relationship to the current market was fundamentally different from my own.
Editors solicited work from them, not the other way around. When they wrote something on spec, they could directly contact editors with whom they’d had long and fruitful professional associations — bypassing the whole “slush reader” apparatus.
I don’t know if these established writers failed to mention that none of this applied to the would-be writers in the audience because they thought it was obvious or because it never occurred to them, but either way, it didn’t do me a lick of good.
What worked for me? Well, that’s the point, isn’t it? What worked for me won’t work for you. Not only was my path into the field pretty idiosyncratic, but any generally applicable principle to be derived from it has been obsolete for decades.
But some things don’t change. I benefited immensely from the kindness — sometimes protracted, sometimes momentary — of writers who spoke to youth groups, served as writers-in-residence, and guest-lectured at my summer D&D camp.
Above all, I beneﬁted from Judith Merril, a towering writer, critic and editor who went into voluntary exile in Toronto after the Chicago police riots of 1968, and opened the Spaced Out Library, now the Merril Collection, the largest public sf reference library in the world.
Judy didn’t just serve as writer-in-residence, reading my manuscripts when I took the subway downtown to give them to her. She also did writer-in-the-schools programs, founding serious writers’ workshops that endured for decades.
My high-school workshop was one such; I kept attending it for years after I graduated (I wasn’t alone). Judy also steered the writers she critiqued into peer groups, like the still-thriving Cecil Street Irregulars, which I joined in the early 1990s.
Other writers were likewise kind and generous with their time. Tanya Huff worked behind the counter at Bakka bookstore; she sold me the ﬁrst sf novel I ever bought with my own money (H Beam Piper’s Little Fuzzy).
Tanya was immensely patient with me, and even read manuscripts I shyly brought down to the store, giving me encouraging — but unflinching — feedback. When Tanya quit to write full time, I got her job in the store.
Ed Llewellyn and Ed Greenwood were guest speakers at the D&D summer camp I attended. Both were incredibly encouraging when I approached them after their talks to tell them I wanted to write.
Parke Godwin was guest of honor at the ﬁrst con I ever volunteered at; when I brought him his coffee, he patiently listened to me as I told him I wanted to write and took me seriously, telling me about the importance of good habits.
These writers didn’t have any career advice for me per se, but I wouldn’t have had a career without them — without them taking me seriously, even at a very young age. I try to pay their kindness forward by encouraging the young writers who cross my path.
As to commercial advice, there’s very little I can offer, I’m afraid. I like Heinlein’s advice (“1. Write. 2. Finish. 3. Submit. 4. Revise to editorial spec.“).
I have a general method (“Find publications that feature work like yours, research their submission process, send your story to the highest-paying ones ﬁrst”).
As for speciﬁc market advice, that’s something that you should get from peers, not the people who came before you. When I was starting out, other would-be writers and I obsessively shared notes on new markets, editorial tastes, and other nuts-and-bolts.
Writers who are at the same place in their development as you have advice that is far more likely to be applicable to your situation. What’s more, they’re also the kinds of writers you should be seeking out to join in a critiquing group — your peers.
The reality is that “breaking in” is a grind. It took me a decade from my ﬁrst submission to my ﬁrst professional publication; 19 years before my ﬁrst novel hit the shelves.
Perseverance is the greatest predictor of success here, and support from your peers is the best source of strength and resiliency over that long road.
The Framework laptop is the first laptop ever to score a 10/10 from iFixit for repairability. But it’s no thick-as-a-brick throwback the size of a 2005 Thinkpad — it’s approximately the same dimensions as a MacBook.
Mine was delivered at the end of August. I got it set up by the first of September and have been using it ever since. Yesterday, I put my 2019 Thinkpad on my pile of “laptops to refurbish and donate.” I’ve bought a new Thinkpad almost every year since 2006. I think that’s over.
I switched to Thinkpads as part of my switch to Ubuntu, a ﬂavor of GNU/Linux that was designed to be easy to use for laypeople. My Unix systems administration days were more than a decade behind me when I made the switch.
I loved Thinkpads…at ﬁrst. Not only were they rugged as hell, but they had an incredible warranty. For about $150/year, IBM guaranteed that a service tech would come to your home or hotel room, anywhere in the world, within 24 hours, and ﬁx your machine.
Prior to my Thinkpad switch, I’d been a Powerbook user and a prisoner to Applecare. I made a practice of buying two Powerbooks at a time and keeping them in synch so that when one inevitably broke down, I could leave it for weeks or months with Apple and use the other one.
I was a heavy traveller then (I was EFF’s European Director, on the road 27+ days/month — I even stopped plugging in my fridge because it was costing me $10/month to keep my ice-cubes frozen), and a dead laptop meant that I was beached, unable to do any work.
I loved Macos, but the Powerbooks were really shitty machines, with incredibly poor build quality and a captive repair chain that was run in a way that made it clear that its managers understood that its customers had no alternative.
Switching to Ubuntu was disorienting…at ﬁrst. It was a lot like the time we renovated our kitchen and moved everything around, and I spent a month reaching for a cutlery drawer that wasn’t there. But then, one day, I just acclimated and never noticed it again.
So it was with OSes. If you’re noticing your OS, something’s wrong. With Ubuntu, I got a GUI that was similar enough to Macos that I could retrain myself, and when things went wrong, I had access to an (admittedly esoteric but) incredibly powerful suite of command-line tools.
This turned out to be an ideal combination. When everything worked, the UX was effectively identical to my Macos days. When things went wrong with my hardware, I never had more than 24h downtime — even when some of my RAM went bad while I was in Mumbai!
And when software got wonky — something that happened with the same approximate frequency as I experienced with Macos and when I was a CIO administering large heterogeneous networks of Mac/Win systems — the recovery tools were far superior.
But it wasn’t to last. IBM sold its Thinkpad division to Lenovo and everything started to go to shit. The actual systems acquired layers and layers of proprietary crap — secretive Nvidia graphics cards, strange BIOS rubbish — that made installing Ubuntu progressively harder.
The hardware got worse, too. When I lived in the UK, my Thinkpads always shipped with a UK keyboard; I’d order a US keyboard and swap it in myself.
By 2015, Thinkpads required a full disassembly with multiple specialized tools and tape-removal to fix the keyboards. Also, the keyboards got worse — I had to have three keyboard replacements in 2015, and I couldn’t perform any of them myself.
Things really came to a head in 2019. That was the year I bought and returned two Thinkpads because I couldn’t stabilize Ubuntu on them. The third, a giant, heavy Carbon X1, took three months and several bug-ﬁxes by Lenovo’s driver team before it worked.
Still, I was ready to buy another Thinkpad by last spring. What else was I going to buy? I wanted something maintainable, and I loved the hardware mouse-buttons and the Trackpoint. But Lenovo was estimating 4-5 months to fulﬁll orders, so I closed the window and bailed.
Then I saw iFixit’s teardown of a Framework laptop. They described a computer whose hardware was fully user-maintainable and upgradeable. The system opens with six “captive” screws (they stay in the case), and then every component can be easily accessed.
There’s no tape. There’s no glue. Every part has a QR code that you can shoot with your phone to go to a service manual that has simple-to-follow instructions for installing, removing and replacing it. Every part is labeled in English, too!
The screen is replaceable. The keyboard is replaceable. The touchpad is replaceable. Removing the battery and replacing it takes less than ﬁve minutes. The computer actually ships with a screwdriver.
All this, without sacrificing size or power — it’s so similar to a MacBook that a friend who came over for dinner (and who knows how I feel about proprietary Apple hardware) expressed shock that I’d switched to a MacBook!
The computer performs as well or better than my 2019 Thinkpad, but it doesn’t need the Thinkpad’s proprietary, ~$200 dock — a cheap, $60 device lets me easily connect it to all my peripherals and my desktop monitor, over USB-C. No drivers or conﬁguration needed!
Installing Ubuntu was (nearly) painless. I had been loath to upgrade the version of Ubuntu I was running on the Thinkpad, lest I kick off another cascade of brutal, tier-2 bug-hunting in the system’s proprietary drivers. As a result, I was still running the 2018 “Long Term Support” release.
When I installed Ubuntu on the Framework, I used the latest version — the Framework ships with a very up-to-date wiﬁ card that the older version of Ubuntu couldn’t recognize. Then I simply dumped all my ﬁles over from a backup drive.
Jumping three years’ worth of OSes in one go, moving over my preferences and conﬁguration ﬁles from a Thinkpad, did not work perfectly. A single trackpad conﬁg ﬁle didn’t play nice and I had to hunt it down and delete it, and then everything else was literally ﬂawless.
The hardware is also nearly ﬂawless, though I do have a few minor caveats. The computer ships disassembled: you have to open it and install your RAM, SSD, and wiﬁ card. The ﬁrst two were easy — the third was a major pain in the ass.
The standard wiﬁ card antenna cables are absurdly ﬁddly, and the Framework documentation wasn’t clear enough to see me through. However, when I tweeted to the company about it, they responded swiftly with a video that demystiﬁed it.
Another caveat: I really miss my Thinkpad Trackpoint (the little nub in the middle of the keyboard) and the three hardware mouse buttons on the trackpad. I’m finding it really hard to reliably hit the right regions of my trackpad to get left-, right- and middle-clicks.
I’ve drawn little hints on it in Sharpie, and I’m working with Canonical, who make Ubuntu, on remapping the button areas. But judging from the Framework forums, I’m not the only Thinkpad expat who’d like to swap in a Thinkpad-style keyboard and trackpad.
But the good news is that if anyone wants to make that keyboard and trackpad, I can swap them in myself, in minutes, with one tool.
That tool — a small screwdriver — is also sufﬁcient to upgrade the CPU or replace the screen, speakers, webcam, etc.
The other components are all just fine. The webcam and mic both come with hardware off-switches (not just covers, but actual electrical isolation switches that take them offline until you switch them back on). The speakers are loud enough.
The screen is sharper than the one on my Thinkpad (though it’s glossier and a little harder to read in direct sunlight).
I haven’t even mentioned the ports! The Framework has four expansion ports that ﬁt square dongles for HDMI, Ethernet, various USBs, etc.
The Framework site lets you buy as much or as little computer as you want. If you have your own RAM or SSD, you just uncheck those boxes. If you don’t bother with Windows (like me), you save $139-200.
Having used this system for nearly a month, I can unequivocally recommend it! However! Most of my use of this computer was from my sofa, while I was recovering from hip-replacement surgery. I haven’t road-tested it at all.
But I’ll note here that if it turned out that a component failed due to my usual rough handling, I could replace it with a standard part in a matter of minutes, myself, in whatever hotel room I happened to be perching in, using a single screwdriver.
It’s been a long time since I owned a computer that was more interesting with its case off than on, but the Framework is a marvel of thoughtful, sustainable, user-centric engineering.
It puts the lie to every claim that portability and reliability can’t coexist with long-lasting, durable, upgradeable, sustainable hardware.
I started buying a new laptop every year as a reward to myself for quitting smoking.
The environmental consequences of that system weren’t lost on me, even given my very good track-record of re-homing my old computers with people who needed them.
But with the Framework, I’m ready to change that policy.
From now on, I can easily see myself upgrading the CPU or the screen on an annual basis, or packing in more RAM. But the laptop? Apart from the actual chassis falling apart, there’s no reason I’d replace it for the whole foreseeable future.
This is a beautiful, functional, sustainable, thoughtful and even luxurious computer (Framework offers a 2TB SSD, while Lenovo has been stuck at 1TB drives for years and years).
Based on a month’s use, I am prepared to declare myself a Framework loyalist, and to retire my last Thinkpad…forever.
#15yrsago Bruce Sterling story: How kids’ lives will be ruined by Internet control http://www.churchofvirus.org/bbs/index.php?board=6;action=display;threadid=36318
#10yrsago OnStar vows to track your movements forever, even if you cancel the service https://www.wired.com/2011/09/onstar-tracks-you/
#5yrsago RCMP: Former Canadian mint worker smuggled out $180K by hiding it up his butt https://ottawacitizen.com/news/local-news/egan-170k-in-mint-gold-allegedly-smuggled-in-body-cavity-judge-hears
#5yrsago What yesterday’s hilariously awful testimony by Wells Fargo’s CEO portends for his future https://www.nakedcapitalism.com/2016/09/wells-fargo-ceos-teflon-don-act-backfires-at-senate-hearing-i-take-full-responsibility-means-anything-but.html
#5yrsago Free trade lowers prices — but not on things poor people need (and it pushes up housing prices) http://www.ddorn.net/papers/Autor-Dorn-Hanson-ChinaShock.pdf
#5yrsago Lickspittle consigliere: how the super-rich abuse their wealth managers as loyalty tests https://www.theguardian.com/business/2016/sep/21/how-to-hide-it-inside-secret-world-of-wealth-managers
#1yrago Fincen (they fucking knew all along) https://pluralistic.net/2020/09/21/too-big-to-jail/#fincen
A nonfiction book about excessive buyer-power in the arts, co-written with Rebecca Giblin, “The Shakedown.” FINAL EDITS
A post-GND utopian novel, “The Lost Cause.” FINISHED
* From Wayback to Way Forward: The Internet Archive turns 25, Oct 21
* “Attack Surface”: The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it “a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance.” Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
“How to Destroy Surveillance Capitalism”: an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 (print edition: https://bookshop.org/books/how-to-destroy-surveillance-capitalism/9781736205907) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
“Little Brother/Homeland”: A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
“Poesy the Monster Slayer” a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p1562/_Poesy_the_Monster_Slayer.html.
This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
“When life gives you SARS, you make sarsaparilla” -Joey “Accordion Guy” DeVilla
Crystalline quartz melts between 1670 °C (tridymite) and 1713 °C (cristobalite), and because quartz is pervasive and easily identified, melted grains serve as an important temperature indicator. At TeH, we observed that unmelted potsherds displayed no melted quartz grains, indicating exposure to low temperatures. On the other hand, most quartz grains on the surfaces of pottery, mudbricks, and roofing clay exhibited some degree of melting, and unmelted quartz grains were rare. Nearly all quartz grains found on broken, unmelted surfaces of potsherds were also unmelted. On melted pottery and mudbricks, melted quartz has an estimated density of 1 grain per 5 mm².
Melted quartz grains at TeH exhibit a wide range of morphologies. Some show evidence of partial melting that only melted grain edges and not the rest of the grain (Figs. 22, 23). Others displayed nearly complete melting with diffusion into the melted Ca–Al–Si matrix of pottery or mudbrick (Fig. 22). Melted quartz grains commonly exhibit vesiculation caused by outgassing (Figs. 22, 23), suggesting that those grains rose above quartz’s melting point of ~ 1713 °C.
An SEM–EDS elemental map of one melted grain showed that the quartz had begun to dissociate into elemental Si (Fig. 22b). Another grain (Fig. 23c–e) displayed flow marks consistent with exposure to temperatures above 1713 °C, where the viscosity of quartz falls low enough for it to flow easily. Another SEM–EDS analysis confirmed that one agglutinated mass of material is 100 wt.% SiO₂ (Fig. 23f, g), suggesting that this polycrystalline quartz grain shattered, melted, and partially fused again.
Moore et al.17 reported that during heating experiments, many quartz grains 50-µm-wide remained visually unaltered up to ~ 1700 °C. By 1850 °C, all quartz grains fully melted. These experiments establish a particle-size dependency and confirm a melting range for > 50-µm-wide TeH quartz grains of ~ 1700–1850 °C. Melted > 50-µm-wide quartz grains on the surfaces of melted pottery and mudbrick from the TeH destruction layer indicate exposure to these unusually high temperatures > 1700 °C.
Previously, Thy et al.70 proposed that glass at Abu Hureyra did not form during a cosmic impact, but rather, formed in biomass slag that resulted from thatched hut ﬁres. However, Thy et al. did not determine whether or not high-temperature grains existed in the biomass slag. To test that claim, Moore et al.17 analyzed biomass slag from Africa and found only low-temperature melted grains with melting points of ~ 1200 °C, consistent with a temperature range for biomass slag of 1155–1290 °C, as reported by Thy et al.71. Upon testing the purported impact glass from Abu Hureyra, Moore et al.17 discovered high-temperature mineral grains that melt in the range of 1713° to > 2000 °C, as are also found in TeH glass. These test results suggest that the melted glass from Abu Hureyra must have been exposed to higher temperatures than those associated with ﬁres in thatched huts. Because of the presence of high-temperature minerals at TeH, we conclude that, as at Abu Hureyra, the meltglass could not have formed simply by burning thatched huts or wood-roofed, mudbrick buildings.
The presence of melted spherulitic objects (“spherules”) has commonly been used to help identify and investigate high-temperature airburst/impact events in the sedimentary record. Although these objects are referred to here as “spherules,” they display a wide range of other impact-related morphologies that include rounded, sub-rounded, ovate, oblate, elongated, teardrop, dumbbell, and/or broken forms17,72,73,74,75,76,77,78,79,80,81,82. Optical microscopy and SEM–EDS are commonly used to identify and analyze spherules and the processes by which they are formed. Care is needed to conclusively distinguish high-temperature spherules produced by cosmic impacts from other superﬁcially similar forms. Other such objects that frequently occur in sediments include anthropogenic spherules (typically from modern coal-ﬁred power plants), authigenic framboids (Supporting Information, Fig. S7), rounded detrital magnetite, and volcanic spherules.
Spherules in TeH sediment were investigated from stratigraphic sequences that include the MB II destruction layer at four locations: palace, temple, ring road, and wadi (Fig. 24). For the palace (Field UA, Square 7GG), the sequence spanned 28 cm with 5 contiguous samples of sediment ranging from 3-cm thick for the MB II destruction layer to 13-cm thick for some outlying samples. In the palace, 310 spherules/kg (Fig. 24d) were observed in the destruction layer with none found in samples above and below this layer. For the temple (Field LS, Square 42J), 5 continuous samples spanned 43 cm and ranged in thickness from 6 to 16 cm; the MB II layer contained ~ 2345 Fe- and Si-rich spherules/kg with 782/kg in the sample immediately below and none at other levels (Fig. 24c). Six contiguous samples from the ring road (Field LA, Square 28 M) spanned 30 cm with all 5 cm thick; the MB II destruction layer at this location contains 2150 spherules/kg with none detected in younger or older samples (Fig. 24b). Five discontinuous samples from the wadi spanned 170 cm, ranging from 10-cm thick for the destruction layer up to 20-cm thick for other samples; the MB II destruction layer at this location contained 2780 spherules/kg with none in samples from other levels (Fig. 24a, Supporting Information, Table S3). Notably, when melted mudbrick from the ring road was being mounted for SEM analysis, numerous loose spherules were observed within vesicles of the sample, conﬁrming a close association between the spherules and meltglass. At all four locations, the peaks in high-temperature spherule abundances occur in the MB II destruction layer dating to ~ 1650 BCE.
SEM images of spherules are shown in Figs. 25, 26, 27 and 28, and compositions are listed in Supporting Information, Table S4. The average spherule diameter was 40.5 µm with a range of 7 to 72 µm. The dominant minerals were Fe oxides averaging 40.2 wt.%, with a range of up to 84.1 wt.%; elemental Fe with a range of up to 80.3 wt.%; SiO₂ averaging 20.9 wt.%, ranging from 1.0 to 45.2 wt.%; Al₂O₃ averaging 7.8 wt.% with a range of up to 15.6 wt.%; and TiO₂ averaging 7.1 wt.% with a range of up to 53.1 wt.%. Fourteen spherules had compositions > 48 wt.% of oxidized Fe, elemental Fe, and TiO₂; five spherules contained 75 wt.% Fe with no Ti. Eight of 23 spherules analyzed contained detectable levels of Ti at up to 53.1 wt.%.
Two unusual spherules from the palace contain anomalously high percentages of rare-earth elements (REEs) at > 37 wt.% of combined lanthanum (La) and cerium (Ce) (Fig. 26), as determined by preliminary measurements using SEM–EDS. Minor oxides account for the rest of the spherules’ bulk composition (Table S1).
One 54-µm-wide sectioned spherule contains titanium sulfide (TiS) with a melting point of ~ 1780 °C. TiS, known as wassonite, was first identified in meteorites (Fig. 27) and has been reported in impact-related material17,81,83. However, TiS sometimes occurs as an exsolution product forming fine networks in magnetite and ilmenite and can be of terrestrial origin.
One unusual piece of 167-µm-wide Ca–Al–Si meltglass contains nearly two dozen iron oxide spherules on its surface (Fig. 28). The meltglass contains a completely melted quartz grain as part of the matrix (Fig. 28b). Most of the spherules appear to have been ﬂattened or crushed by collision with the meltglass while they were still partially molten (Fig. 28c).
Melted materials from non-impact-related combustion have been reported in multiple studies. Consequently, we investigated whether Ca-, Fe-, and Si-rich spherules and meltglass (mudbrick, pottery, plaster, and roofing clay) may have formed normally, rather than from a cosmic impact event. For example, (i) glassy spherules and meltglass are known to form when carbon-rich biomass smolders below ground at ~ 1000° to 1300 °C, such as in midden mounds71. They also form in buried peat deposits84, underground coal seams85, burned haystacks86, and in large bonfires, such as at the Native American site at Cahokia, Illinois, in the USA87. (ii) Also, ancient fortifications (hillforts) in Scotland and Sweden, dating from ~ 1000 BCE to 1400 AD, have artificially vitrified walls that melted at temperatures of ~ 850° to 1000 °C88. (iii) Partially vitrified pottery and meltglass derived from the melting of wattle and daub (thatch and clay) with estimated temperatures of ~ 1000 °C have been reported in burned houses of the Trypillia culture in Ukraine89,90. (iv) Vitrified mudbricks and pottery that melted at comparable temperatures have also been reported elsewhere91, including in the northern Jordan Valley at an Early-Bronze-Age site called Tell Abu al-Kharaz92 and at Early-Bronze-Age Tell Chuera in Syria93. All these sites describe melting temperatures ranging from ~ 850° to 1300 °C.
In another example of meltglass, vitriﬁed bricks at Tell Leilan in Syria, dating to ~ 2850 to 2200 BCE, are estimated by Weiss94 to have melted at ~ 1200 °C, and he attributed high-calcium spherules to low-temperature combustion of thatch roofing materials95. However, thatch has low calcium content, leading Courty96 to propose that this material formed from melted lime-based plaster during an airburst/impact at Tell Leilan ~ 550 years before the destruction of TeH. Courty96 reported aluminosilicate spherules with unusual high-temperature elemental nickel (melting point: ~ 1455 °C); complex vesicular glass particles that contain terrestrially rare unoxidized nickel inclusions; and both single and multiple calcite spherules (melting point = 1500 °C, as measured by experiments in this contribution).
For the melted materials, there is a definitive difference: high-temperature minerals are embedded in meltglass at TeH but none are present at these other sites (except for Tell Leilan). To explore this difference, Moore et al.17 investigated biomass glass from midden mounds in Africa and found no high-temperature minerals. For this contribution, we used SEM–EDS to examine aluminosilicate meltglass from an underground peat fire in South Carolina, USA; meltglass in coal-fired fly ash from New Jersey, USA; and mining slag from a copper mine in Arizona, USA. All these meltglass examples display unmelted quartz and contain no other high-temperature melted grains, consistent with low-temperature melting at less than ~ 1300 °C.
At the sites with non-impact meltglass, estimated temperatures were consistently less than 1300 °C, too low to melt magnetite into Fe-rich spherules, e.g., with compositions of > 97 wt.% FeO, as are found at TeH. Nor can these low temperatures produce meltglass and spherules embedded with melted zircon (melting point = 1687 °C), chromite (2190 °C), quartz (1713 °C), platinum (1768 °C), and iridium (2466 °C). Moore et al.17 confirmed that the melting of these high-temperature minerals requires minimum temperatures of ~ 1500° to 2500 °C.
This evidence demonstrates that although the matrix of the spherules and meltglass at TeH likely experienced incipient melting at temperatures lower than ~ 1300 °C, this value represents only the minimum temperature of exposure, because the high-temperature minerals embedded in them do not melt at such low temperatures. Instead, the spherules and meltglass at TeH must have reached temperatures greater than ~ 1300 °C, most likely involving brief exposure to ambient temperatures of ~ 2500 °C, the melting point of iridium. These temperatures far exceed those characteristic of city ﬁres and other types of biomass burning. In summary, all of this evidence is consistent with very high temperatures known during cosmic impacts but inconsistent with other known natural causes.
In sediments of the destruction layer, we observed amber-to-off-white-colored spherules (Fig. 29) at high concentrations of ~ 240,000/kg in the palace, ~ 420/kg in the temple, ~ 60/kg on the ring road, and ~ 910/kg in the wadi (Supporting information, Table S2). In all four proﬁles, the spherules peak in the destruction layer with few to none above or below. Peak abundances of calcium carbonate spherules are closely associated with peak abundances of plaster fragments, which are the same color. By far the most spherules (~ 250× more) occurred in the destruction layer of the palace, where excavations showed that nearly every room and ceiling was surfaced with off-white lime-based plaster. Excavators uncovered high-quality lime plaster fragments still adhering to mudbricks inside the MB II palace complex, and in one palace room, we uncovered fragments of melted plaster (Fig. 29e). In contrast, lime plaster was very rarely used in buildings on the lower tall, including those near the temple.
To explore a potential connection between plaster and spherules, we performed SEM–EDS on samples of the palace plaster. Comparison of SEM–EDS analyses shows that the plaster composition has a > 96% similarity to the spherule composition: CaCO₃ = 71.4 wt.% in plaster versus 68.7 wt.% in the spherules; elemental C = 23.6 versus 26.3 wt.%; SiO₂ = 2.4 versus 1.8 wt.%; MgO = 1.7 versus 2.0 wt.%; and SO₃ = 0.94 versus 1.2 wt.%. The high carbon percentage and low sulfur content indicate that the plaster was made from calcium carbonate and not gypsum (CaSO₄·2H₂O). SEM imaging revealed that the plaster contains small plant parts, commonly used in plaster as a binder, which are likely the source of the high abundance of elemental C in the plaster. Inspection showed no evidence of microfossils, such as coccoliths, brachiopods, and foraminifera. The morphology of the spherules indicates that they are not authigenic or biological in origin.
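The text quotes a single "> 96% similarity" figure without naming a metric; one simple overlap measure that reproduces a value above 96% from the five components listed above is 100 minus half the summed absolute differences. A minimal sketch (the metric itself is an illustrative assumption, not the authors' stated method):

    # Plaster vs. spherule compositions (wt.%), as quoted in the text above.
    plaster  = {"CaCO3": 71.4, "C": 23.6, "SiO2": 2.4, "MgO": 1.7, "SO3": 0.94}
    spherule = {"CaCO3": 68.7, "C": 26.3, "SiO2": 1.8, "MgO": 2.0, "SO3": 1.2}

    # Illustrative overlap measure: 100 minus half the total absolute difference.
    total_diff = sum(abs(plaster[k] - spherule[k]) for k in plaster)
    similarity = 100 - total_diff / 2
    print(round(similarity, 1))  # -> 96.7, i.e. > 96% similar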
One of the earliest known uses of CaCO₃-based plaster was in ~ 6750 BCE at Ayn Ghazal, ~ 35 km from TeH in modern-day Amman, Jordan97. At that site, multi-purpose lime plaster was used to make statues and figurines and to coat the interior walls of buildings. Because the production of lime-based plaster occurred at least 3000 years before TeH was destroyed, the inhabitants of TeH undoubtedly were familiar with the process. Typically, lime powder was produced in ancient times by stacking wood/combustibles interspersed with limestone rocks and then setting the stack on fire. Temperatures of ~ 800–1100 °C were required to transform the rocks into crumbly chalk, which was then mixed with water to make hydrated lime and plastered onto mudbrick walls97.
At TeH, fragments of CaCO₃-based plaster are intermixed in covarying abundances with CaCO₃-based spherules, with both compositions matching to within 96%. This similarity suggests that the carbonate spherules are derived from the plaster. We infer that the high-temperature blast wave from the impact event stripped some plaster from the interior walls of the palace and melted some into spherules. However, it is difficult to directly melt CaCO₃, which gives off CO₂ at high temperatures and decomposes into lime powder. We investigated this cycle in a heating experiment with an oxygen/propylene torch and found that we could decompose the plaster at ~ 1500 °C, the upper limit of the heating test, and begin incipient melting of the plaster. The heated plaster produced emergent droplets at that temperature but did not transform into free spherules (Supporting Information, Text S2).
Similar spherules have been reported from Meteor Crater, where spherules up to ~ 200 μm in diameter are composed entirely of CaCO₃ formed from a cosmic impact into limestone98,99. One of several possible hypotheses for TeH is that during the impact event, the limestone plaster converted to CaO with an equilibrium melting point of 2572 °C. However, it is highly likely that airborne contaminants, such as sodium and water vapor, reacted with the CaO and significantly lowered the melting point, allowing spherule formation at ≥ 1500 °C.
The proposed chemical sequence of events of plaster formation and the later impact is as follows:
Quicklime was mixed with water to make a wet plaster: CaO + H₂O → Ca(OH)₂.
The plaster hardened and slowly absorbed CO₂ to revert to CaCO₃: Ca(OH)₂ + CO₂ → CaCO₃ + H₂O.
The high-temperature impact event melted some plaster into spherules: CaCO₃ + impact heating (≥ ~ 1500 °C) → molten carbonate spherules.
According to the previous investigations17,72,81,82, Fe-rich spherules such as those found at TeH typically melt at > 1538 °C, the melting point of iron (Table 1). Because of the presence of magnetite (Fe₃O₄) in the REE spherule, its melting point is inferred to be > 1590 °C (Table 1). The Si-rich spherules are similar in composition to TeH sediment and mudbrick, and thus, we propose that they were derived from the melting of these materials at > 1250 °C. The carbonate-rich spherules likely formed at > 1500 °C.
Several studies describe a mechanism by which spherules could form during a low-altitude cosmic airburst100,101. When a bolide enters Earth’s atmosphere, it is subjected to immense aerodynamic drag and ablation, causing most of the object to fragment into a high-temperature fireball, after which its remaining mass is converted into a high-temperature vapor jet that continues at hypervelocity down to the Earth’s surface. Depending on the altitude of the bolide’s disruption, this jet is capable of excavating unconsolidated surficial sediments, melting them, and ejecting the molten material into the air as Si- and Fe-rich spherules and meltglass. This melted material typically contains only a very low percentage of material from the impactor itself102.
To more accurately determine the maximum temperatures of the destruction layer, we used SEM–EDS to comprehensively investigate melted minerals on the outer surfaces of melted pottery and mudbricks. We searched for and analyzed zircon (melting point: ~ 1687 °C), chromite (~ 2190 °C), and quartz (~ 1713 °C)17.
Melted zircons in pottery and mudbricks were observed (Fig. 30) at an estimated density of 1 grain per 20 mm². On highly melted surfaces, nearly all zircons showed some degree of melting. In contrast, nearly all zircons found on broken interior surfaces were unmelted (Fig. 30d), except those within ~ 1 mm of melted surfaces. This implies that the temperature of the surrounding atmosphere was higher than the internal temperatures of the melting objects. Unmelted potsherds displayed only unmelted minerals.
The melted zircons in TeH materials exhibit a wide range of morphologies. Most showed evidence of sufﬁcient melting to alter or destroy the original distinctive, euhedral shape of the grains. Also, the grains were often decorated with vesicles that were associated with fractures (Fig. 30a, c).
Stoichiometric zircon contains 67.2 wt.% ZrO₂ and 32.8 wt.% SiO₂, but in several TeH samples, we observed a reduction in the SiO₂ concentration due to a loss of volatile SiO from the dissociation of SiO₂. This alteration has been found to occur at 1676 °C, slightly below zircon’s melting point of 1687 °C103. This zircon dissociation leads to varying ZrO₂:SiO₂ ratios and to the formation of distinctive granular textures of pure ZrO₂, also known as baddeleyite104 (Figs. 30, 31, 32). With increasing time at temperature, zircon will eventually convert partially or completely to ZrO₂. Nearly all zircons observed on the surfaces of melted materials were either melted or showed some conversion to baddeleyite. We observed one zircon grain (Fig. 32d–e) displaying granular ZrO₂ associated with three phases that span a wide range of SiO₂ concentrations, likely formed at temperatures above 1687 °C. This extreme temperature and the competing loss of SiO₂ over an inferred duration of only several seconds led to complex microstructures, where grains melted, outgassed, and diffused into the surrounding matrix.
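For readers keeping track of the chemistry, the dissociation described above corresponds to the standard zircon breakdown reactions (a summary consistent with the text, not a quotation from the paper):

    ZrSiO₄ (zircon) → ZrO₂ (baddeleyite) + SiO₂ (silica melt), beginning near ~ 1676 °C
    SiO₂ → SiO (gas) + ½ O₂ (gas), the volatile loss that depletes SiO₂ in the residue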
Zircon grains have a theoretical, equilibrium melting point of ~ 1687 °C. Under laboratory heating17, zircon grains showed no detectable alteration in shape at ~ 1300 °C but displayed incipient melting of grain edges and dissociation to baddeleyite beginning at ~ 1400 °C, with increasing dissociation to 1500 °C17. Most ~ 120-µm-wide zircon grains were still recognizable but displayed considerable melting17. These experiments establish a lower melting range for TeH zircon grains of ~ 1400° to 1500 °C.
Patterson105 showed that zircon dissociation becomes favorable above 1538 °C and that particles between 1 and 100 µm in size melted and dissociated when passing through a plasma, forming spherules with various amounts of SiO₂ glass containing ZrO₂ crystallites ranging in size from 5 nm to 1 µm. The majority of the ZrO₂ crystallites were monoclinic, but tetragonal ZrO₂ was observed for the smaller crystallite sizes. Residence times were on the order of 100 ms, and the specific ZrO₂ to SiO₂ ratio within each spherule depended on the particle’s time at temperature106.
Bohor et al.104 presented images of impact-shocked zircons from the K-Pg impact event at 66 Ma that are morphologically indistinguishable from those at TeH. Decorated zircon grains are uncommon in nature but commonly associated with cosmic impact events, as evidenced by two partially melted zircons from the known airburst/impact at Dakhleh Oasis, Egypt (Fig. 30e). The presence of bubbles indicates that temperatures reached at least 1676 °C, where the zircon began to dissociate and outgas. Similar dissociated zircon grains also have been found in tektite glass and distal fallback ejecta (deposited from hot vapor clouds). Granular baddeleyite-zircon has been found in the ~ 150-km-wide K-Pg impact crater107 and the 28-km-wide Mistastin Lake crater in Canada107. The dissociation of zircon requires high temperatures of ~ 1676 °C104, implying that TeH was exposed to similar extreme conditions.
Examples of melted chromite, another mineral that melts at high temperatures, were also observed. Thermally-altered chromite grains were observed in melted pottery, melted mudbricks, and melted roofing clay from the palace. Their estimated density was 1 grain per 100 mm², making them rarer than melted zircon grains. The morphologies of chromite grains range from thermally altered (Fig. 33a) to fully melted (Fig. 33b, d). One chromite grain from the palace displays unusual octahedral cleavage or shock-induced planar fractures (Fig. 33b). The typical chemical composition for chromite is 25.0 wt.% Fe, 28.6 wt.% O, and 46.5 wt.% Cr, although the Cr content can vary from low values to ~ 68 wt.%. SEM images reveal that, as chromite grains melted, some Cr-rich molten material migrated into and mixed with the host melt, causing an increase in Cr and Fe, and a corresponding depletion of Si. The ratio of Cr to Fe in chromite affects its equilibrium melting point, which varies from ~ 1590 °C for a negligible amount of Cr up to ~ 2265 °C for ~ 46.5 wt.% Cr, as in chromite or chromian magnetite (FeCr₂O₄), placing the melting point of TeH chromite at close to 2265 °C.
Chromite grains theoretically melt at ~ 2190 °C. Moore et al.17 reported the results of heating experiments in which chromite grains in bulk sediment showed almost no thermal alteration up to ~ 1500 °C (Supporting Information, Fig. S8). At temperatures of ~ 1600 °C and ~ 1700 °C, the shapes of chromite grains were intact but exhibited limited melting of grain edges. These results establish a range of ~ 1600° to 1700 °C for melting chromite grains.
Because chromite typically does not exhibit cleavage, the grain exhibiting this feature is highly unusual. Its origin is unclear but there are several possibilities. The cleavage may have resulted from exsolution while cooling in the source magma. Alternately, the lamellae may have resulted from mechanical shock during a cosmic impact, under the same conditions that produced the shocked quartz, as reported by Chen et al.108 for meteorites shocked at pressures of ~ 12 GPa. Or they may have been formed by thermal shock, i.e., rapid thermal loading followed by rapid quenching. This latter suggestion is supported by the observation that the outside glass coating on the potsherd does not exhibit any quench crystals, implying that the cooling progressed very rapidly from liquid state to solid state (glass). This is rare in terrestrial events except for some varieties of obsidian, but common in melted material produced by atomic detonations (trinitite), lightning strikes (fulgurites), and cosmic airburst/impacts (meltglass)81. More investigations are needed to determine the origin of the potentially shocked chromite.
Using SEM–EDS, we investigated abundances and potential origins (terrestrial versus extraterrestrial) of platinum-group elements (PGEs) embedded in TeH meltglass, in addition to Ni, Au, and Ag. Samples studied include melted pottery (n = 3); melted mudbrick (n = 6); melted roofing clay (n = 1), and melted lime-based building plaster (n = 1). On the surfaces of all four types of meltglass, we observed melted metal-rich nuggets and irregularly shaped metallic splatter, some with high concentrations of PGEs (ruthenium (Ru), rhodium (Rh), palladium (Pd), osmium (Os), iridium (Ir), and platinum (Pt)) and some nuggets enriched in silver (Ag), gold (Au), chromium (Cr), copper (Cu), and nickel (Ni) with no PGEs (Figs. 34, 35). Importantly, these metal-rich nuggets were observed only on the top surfaces of meltglass and not inside vesicles or on broken interior surfaces.
Using SEM–EDS, we identified variable concentrations and assemblages of PGEs. The metallic particles appear to have melted at high temperatures, based on the melting points of the elements present: iridium = 2466 °C; platinum = 1768 °C; and ruthenium = 2334 °C, indicating a temperature range between approximately 1768° and 2466 °C. Our investigations also identified two PGE groups, one with nuggets in which Pt dominates Fe and the other with metallic splatter in which Fe dominates Pt.
We conducted 21 measurements on Pt-dominant TeH nuggets on meltglass (Fig. 34a–c). The nuggets average ~ 5 µm in length (range 1–12 µm) with an estimated concentration of 1 nugget per 10 mm². For these nuggets, Fe concentrations average 1.0 wt.%, Ir = 6.0 wt.%, and Pt = 44.9 wt.% (Supporting Information, Tables S6, S7). The presence of PGEs was confirmed by two SEM–EDS instruments that verified the accurate identification of PGEs through analyses of several blanks that showed no PGE content. Some concentrations are low, but the relative dominance of Fe > Pt or Pt > Fe was found to be consistent between the two instruments.
To determine the source of TeH nuggets and splatter, we constructed ternary diagrams. Terrestrial PGE nuggets are commonly found in ore bodies that, when eroded, can become concentrated in riverine placer deposits, including those of the Jordan River floodplain. To compare Fe–Ir–Pt relationships among the TeH nuggets, we compiled data from nearby placer deposits in Greece109, Turkey110,111, and Iraq112, along with distant placers in Russia113,114,115, Canada116, and Alaska, USA117,118. The compilation of 109 Pt-dominant placer nuggets indicates that the average Fe concentration is 8.2 wt.%, Ir = 2.9 wt.%, and Pt = 80.3 wt.%. For the Ir-dominant placer nuggets (n = 104), Fe = 0.4 wt.%, Ir = 47.8 wt.%, and Pt = 5.3 wt.% (Supporting Information, Tables S6, S7). The ternary diagrams reveal that the values for Pt-dominant TeH nuggets overlap with Pt-dominant terrestrial placer nuggets but the Fe-dominant splatter is dissimilar (Fig. 36a).
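A ternary diagram simply re-plots each (Fe, Ir, Pt) triple after normalizing the three components to sum to 100%. As a rough illustration of the construction (the barycentric mapping below is the standard textbook transform, not code from the study; the example values are the placer-nugget averages quoted above):

    import math

    def ternary_xy(fe, ir, pt):
        """Normalize an (Fe, Ir, Pt) triple and map it onto a unit triangle
        with Fe at the lower-left vertex, Ir at the lower-right, Pt at the top."""
        total = fe + ir + pt
        fe, ir, pt = fe / total, ir / total, pt / total
        x = 0.5 * (2 * ir + pt)
        y = (math.sqrt(3) / 2) * pt
        return x, y

    # Average Pt-dominant placer nugget from the compilation above:
    # Fe = 8.2 wt.%, Ir = 2.9 wt.%, Pt = 80.3 wt.%
    print(ternary_xy(8.2, 2.9, 80.3))  # plots near the Pt vertex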
We made 8 measurements on TeH Fe-dominant PGE splatter (Fig. 34d–f). The metal-rich areas average ~ 318 µm in length (range 20–825 µm) with an estimated concentration of 1 PGE-rich bleb per mm², 100× more common than the TeH nuggets. Average concentrations are Fe = 17.5 wt.%, Ir = 4.7 wt.%, and Pt = 1.5 wt.%.
We explored a potential extraterrestrial origin by constructing ternary diagrams for comparison of TeH Fe-dominant splatter with known meteorites and comets (Fig. 36b, c). We compiled data for 164 nuggets extracted from carbonaceous chondritic meteorites (e.g., Allende, Murchison, Leoville, and Adelaide)119,120,121,122, seaﬂoor cosmic spherules123,124, iron meteorites122,125, Comet Wild 2126, and cometary dust particles126. For average weight percentages, see Supporting Information, Tables S6, S7. The Fe-dominant TeH splatter (Fig. 36b) closely matches nuggets from carbonaceous chondrites and cosmic spherules but is a weak match for most iron meteorites (Fig. 36c). In addition, the TeH nuggets are similar to four cometary particles, two of which were collected during the Stardust ﬂyby mission of Comet Wild 2 in 2004126. For average weight percentages, see Supporting Information, Tables S6, S7.
To further explore an extraterrestrial connection for TeH Fe-dominant splatter, we compiled wt.% data for TeH PGEs (Rh, Ru, Pd, Os, Ir, and Pt) and normalized them to CI chondrites using values from Anders and Grevesse127. We compared those values to CI-normalized nuggets in carbonaceous chondrites, including CV-type chondrites (e.g., Allende) and CM types (e.g., Murchison)119,120,122,128,129,130,131, seafloor cosmic spherules124, micrometeorites123, and iron meteorites122,125. These results are shown in Fig. 36d.
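Here "CI-normalized" simply means that each element's measured concentration is divided by its concentration in CI carbonaceous chondrites, the standard reference for primitive solar-system composition: for example, normalized Pt = (wt.% Pt in the TeH splatter) ÷ (wt.% Pt in CI chondrites), and likewise for Rh, Ru, Pd, Os, and Ir, so that abundance patterns can be compared across samples with very different absolute concentrations.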
The TeH Fe-dominant splatter closely matches all types of extraterrestrial material with a similar pattern among all data sets: Pd has the lowest normalized values and Os and/or Ir have the highest, closely followed by Pt. The TeH splatter was also compared to the CI-normalized wt.% of bulk meteoritic material from CV- and CM-type chondrites (Fig. 36d). The composition of TeH splatter shows poor correlation with bulk chondritic materials, although the splatter is an excellent geochemical match with the PGE nuggets inside them. In summary, the CI normalization of PGEs suggests an extraterrestrial origin for the Fe-dominant TeH splatter, just as the ternary diagrams also suggest an extraterrestrial source. The correspondence of these two independent results suggests that the quantiﬁcation of PGEs is sufﬁciently accurate in this study.
Another unusually abundant element, Mo, is also associated with Fe-dominant splatter but not with Pt-dominant nuggets. Mo averages 0.3 wt.% with up to 1.1 wt.% detected in Fe-dominant splatter but with none detected in TeH Pt-dominant nuggets. Mo also is not reported in any terrestrial placer nuggets and occurs in low concentrations (less than ~ 0.02 wt.%) in iron meteorites. In contrast, Mo is reported at high concentrations in PGE nuggets from carbonaceous chondrites (~ 11.5 wt.%), cosmic spherules (0.6 wt.%), and cometary material (5.8 wt.%). Thus, the Mo content of TeH splatter appears dissimilar to terrestrial material but overlaps values of known cosmic material, suggesting an extraterrestrial origin.
Based on the volume and weight of the meltglass, we estimate that the extraterrestrial-like metallic TeH Fe-dominant splatter represents only a very small fraction of the meltglass102.
We also investigated nuggets that lack PGEs. The geochemistry of these nuggets shows two distinct populations, one Ni-dominant and one Fe-dominant (Fig. 34g–i). Twelve measurements of TeH samples show enrichments in Ag averaging 5.7 wt.%, Au = 0.6 wt.%, Cr = 2.2 wt.%, Cu = 2.8 wt.%, and Ni = 3.7 wt.%. All particles appear to have been melted at high temperatures: silver at ~ 961 °C; gold at 1064 °C; chromium at 1907 °C; copper at 1085 °C; and nickel at 1455 °C.
Ternary diagrams for Cr, Fe, and Ni show that some TeH nuggets exhibit chemical similarities to mineral deposits in Greece, Turkey, and Oman, suggesting that some nuggets are of terrestrial origin. However, other TeH nuggets are chemically similar to materials found in iron meteorites, chondrites, achondrites, and comets (Supporting Information, Fig. S9). When compared to meteoritic material, the Ni-dominant group roughly corresponds to measurements from sulfide inclusions in chondrites132, and the Fe-dominant group overlaps both chondritic sulfide inclusions and metal-rich grains from Comet Wild 2133. These results suggest that a small fraction of these nuggets may be extraterrestrial in origin.
Importantly, the PGE-rich nuggets and splatter were observed embedded only on melted surfaces of the TeH meltglass but not inside the vesicles or within the meltglass. This suggests that the nuggets and splatter were not contained in the original sedimentary matrix but were fused onto the TeH glass while still molten. Geochemical analyses suggest a dual origin. The Pt-dominant nuggets do not match known extraterrestrial material and instead appear to be of terrestrial origin, possibly from placer deposits and regional mines. It is unclear exactly how they became embedded onto but not inside TeH meltglass, but one possibility is that they were originally buried as river-laid, PGE-rich placer deposits. If so, we propose that they were ejected during the impact event and distributed across the molten glass by the impact blast wave. Another possibility is that they derive from jewelry and raw precious metals in the palace complex that were pulverized and dispersed during the high-velocity destruction of the palace.
In contrast, the Fe-dominant nuggets fused into the surfaces of TeH meltglass closely match the composition of nuggets from chondritic meteorites, cosmic spherules, and comets, consistent with an extraterrestrial origin. The data suggest that a carbonaceous chondrite or a comet detonated in the air near TeH, pulverized PGE-rich nuggets within the bolide, accreted terrestrial placer nuggets, and dusted both terrestrial and extraterrestrial material across the surfaces of molten mudbricks, pottery, and building plaster at low concentrations.
For TeH bulk sediment, neutron activation analyses show Pt abundance peaks in the destruction layer of all profiles tested (Fig. 37a–d) at ~ 2× to 8× an average crustal abundance of 0.5 ppb. Sedimentary Ir was mostly below detection limits (Supporting Information, Table S3). Also, Pt/Pd ratios in bulk sediment from the destruction layers are anomalously higher than background layers by ~ 4× to 14× (Fig. 37e–h).
Abundances of Pt, Ir, and the Pt/Pd ratios all peak in sediment at or near the top of the destruction layer, suggesting an influx of those elements at ~ 1650 BCE, most likely from both extraterrestrial and terrestrial sources. Sedimentary concentrations of Ir only peak in the wadi samples and were not detectable at the other three sites, for unknown reasons.
The interior portions of many melted pieces of mudbrick, roofing clay, and pottery are highly vesicular, and the walls of these vesicles nearly always display an array of metal-rich crystals (Fig. 38). These include elemental iron and iron oxides, labeled as FeO but actually oxidized as hematite (Fe₂O₃) with a melting point of ~ 1565 °C; magnetite (Fe₃O₄), which melts at ~ 1590 °C; and/or elemental iron (Fe), which melts at ~ 1538 °C (Fig. 38a, b, d, f, g). Also observed on vesicle walls were crystals of Fe phosphide (Fe₃P), which melts at ~ 1100 °C (Fig. 38c, g), manganese oxide (MnO) at ~ 1945 °C (Fig. 38d), calcium phosphate (Ca₃(PO₄)₂) at ~ 1670 °C (Fig. 38e), and calcium silicate (Ca₂SiO₄) at ~ 2130 °C (Fig. 38e).
In some cases, these mineral crystals appear to have crystallized from the molten matrix as it solidiﬁed. In other cases, it appears that they plated onto the vesicle surface, suggesting that they may have condensed through vapor deposition from the high-temperature, mineral-saturated atmosphere within the vesicle.
SEM inspection of the palace mudbrick meltglass revealed partially melted grains of magnetite (Fe₃O₄) with a melting point of ~ 1590 °C (Fig. 39a, b) and titanomagnetite (Fe₂TiO₄) with a melting point of ~ 1550 °C (Fig. 39c, d). The latter is an oxyspinel that commonly occurs as discrete grains or as an exsolution product within magnetite. The chemical composition of magnetite at equilibrium is 72.36 wt.% Fe and 27.64 wt.% O. Titanomagnetite is 21.42 wt.% Ti, 49.96 wt.% Fe, and 28.63 wt.% O. SEM–EDS analysis confirms similar compositions for the observed TeH grains.
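The equilibrium compositions quoted above follow directly from atomic masses; as a quick arithmetic check (standard values, not data from the paper):

    Fe₃O₄:   3 × 55.845 (Fe) + 4 × 15.999 (O) = 231.53 g/mol
             Fe = 167.535 / 231.53 ≈ 72.4 wt.%;  O ≈ 27.6 wt.%
    Fe₂TiO₄: 2 × 55.845 (Fe) + 47.867 (Ti) + 4 × 15.999 (O) = 223.55 g/mol
             Ti ≈ 21.4 wt.%;  Fe ≈ 50.0 wt.%;  O ≈ 28.6 wt.%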
These grains display bubble-rich features that are commonly associated with grain fractures. There are several possibilities. (i) Approximately 20% of the time, magnetite grains are reported to be naturally overprinted by porous magnetite as a precipitation product, creating a bubble-like texture134. (ii) Alternatively, these features may be textures caused by differential dissolution135,136. (iii) The grains may have been exposed to temperatures equal to or greater than their melting point, causing the outgassing of volatiles or the rapid reduction of iron oxides. Because of the morphological dissimilarity of TeH grains to published examples of grains altered by precipitation and dissolution, we infer that these grains were altered by exposure to high temperatures.
Sulfide and phosphide grains were found attached to the walls of the vesicles within mudbrick meltglass. SEM–EDS analyses of palace mudbrick meltglass identified melted Fe sulfide (FeS), also known as troilite (Figs. 40, 41), with a composition of 63.53 wt.% Fe and 36.47 wt.% S and a melting point of ~ 1194 °C. Commonly found associated with the Fe sulfide, Fe phosphide (Fe₂P; Fig. 40b, e) is a nickel-poor variety of barringerite, a mineral first identified at Meteor Crater in Arizona. Another variant of Fe phosphide, Fe₃P, was also identified in melted mudbrick from the palace. Both phosphide variants melt at ~ 1100 °C. Fe phosphide is common in meteorites but terrestrially rare; it is found in pyrometamorphic rocks, such as those of the Hatrurim Formation in nearby Israel137. Britvin et al.137 report that the local Fe phosphide displays averages for Fe at 76.4 wt.% and P at 21.4 wt.%, with small amounts of Ni, Co, and Cr at 2.2 wt.%. The composition of Fe phosphide found at TeH is comparable, averaging 78.3 wt.% Fe and 21.7 wt.% P. However, unlike the Fe phosphide grains from Israel, the TeH grains lack detectable Ni, Co, and Cr.
Visual inspection indicated that the sulﬁde and phosphide grains were attached to the inner surfaces of vesicles, and therefore, most likely formed by vapor deposition at > 1100 °C, rather than by crystallization from the melted matrix. Troilite (FeS) is very rare terrestrially but common in meteoritic material81,138. Harris et al.138 reported ﬁnding inclusions of troilite (FeS) as meteoritic clasts in Chilean meltglass proposed to have derived from a cosmic airburst that left meltglass on the surface along a 19-km-long stretch of the Atacama Desert approximately 12,800 years ago. At that site, troilite is typically found lining the walls of vesicles in the meltglass, as is the case for TeH meltglass.
SEM–EDS analyses also show that vesicles in melted mudbrick from the palace contain calcium phosphide (Ca₃P₂) (Fig. 42). This mineral has a stoichiometric composition of ~ 66.0 wt.% Ca and ~ 34.0 wt.% P and melts at ~ 1600 °C (Table 1). Unlike the Fe sulfide and Fe phosphide discussed above that appear to have formed by vapor deposition, these examples most likely crystallized from the molten Ca–Al–Si matrix material.
SEM–EDS analyses of palace mudbrick meltglass (Fig. 43a, b) and melted pottery (Fig. 43c) reveal the presence of Ca silicate, also known as wollastonite (CaSiO₃), with a composition of 48.3 wt.% CaO and 51.7 wt.% SiO₂ and a melting point of ~ 1540 °C (Table 1). These crystals are mostly found on broken surfaces of the matrix but also are sometimes observed inside vesicles. Unlike the Fe sulfide and Fe phosphide above, which appear to have formed by vapor deposition, these crystals appear to have condensed from the molten matrix as it cooled.
Hey everyone, it’s been a while. Hope you’re all doing well. Today I want to show you some more watch related stuff I’ve been working on.
I’ve already shown you the polarizer mod which inverts the display, making it black, but I always wanted a good way to change the display to other colors, and I’ve been experimenting for a long time on how to do just that.
One day I decided to see what the kapton tape I use for my 3D printer bed looks like when applied to the display, and to my surprise it turned out great, transforming the boring regular LCD into a nice amber color.
I then started searching for other polyimide-like tapes in different colors and found a few. From my experiments, anything around 20 microns or thinner could potentially work.
One really cool thing is you can combine some of the different tapes over each other to create a new color. For example the amber + blue tapes create a really vibrant green display. I’ve found this only works for really thin tapes though.
So far I have managed to change the display into a range of colors, including amber, blue, pink, red, and green.
To get the effect you need to apply the tape to the LCD glass. It can be tricky, and I use a plastic credit card to smooth the tape as it’s stuck down. This removes any air bubbles.
Another interesting mod is to make the screen transparent. To do this, you need to remove the reflective foil that's glued to the back of the LCD glass. I did this using a sharp knife and some adhesive remover. You then replace it with polarizing film, so there are polarizers on both sides of the glass.
You also need to paint the inside of the watch PCB white, since if you don't, there won't be enough contrast between the display and the insides to read the numbers. I used simple enamel touch-up paint with a built-in brush.
Be sure to only cover the area shown, avoiding the various contacts required for the watch to still function.
The final effect, when paired with the F-91W variant that has a transparent strap, is definitely unique. I changed the LED to a white one, so when you use the LED in the dark, the light illuminates the entire space below the glass.
I have updated the micro SD socket add-on too, so it's now completely standalone from the backplate and can be attached easily using 4 x M1.4x6mm screws.
The middle of the 3D printed frame is intentionally thin, and bends over the internals of the watch, providing just enough room to ﬁt the socket inside. It’s definitely handy to have a backup memory card on you just in case you need it, especially with micro SD card storage capacity now being up in terabytes.
One thing to note: when you're attaching this, remember to bend the contacts from the watch upwards so they touch the inside of the metal backplate. This is the piezo, which creates the watch's beeping sound.
The strap mechanism on the F-91W isn't the regular spring-loaded type you find on most watches, and it can be a bit of a pain to remove by hand, so I figured I'd 3D print a tool to simplify the process.
It basically involves disassembling two cheap generic bracelet link removers, and inserting them into a custom 3D printed jig. The threaded pins are held in place by two 5mm square nuts.
To remove the strap you just place the watch in the jig, with the threaded pins on the right side. Then you slowly turn the screw and this will pop the friction pins out of the strap.
To replace the strap, you simply add the straps and pins to the watch body as shown. Make sure the pins are hanging out a little on the left side, then place the watch into the jig, this time with the threaded pins on the left side. You can use the screws to push the strap pins so they’re secured to the watch body.
If you're gonna replace the strap with a non-Casio one, it might use a spring-loaded pin instead of a friction one, so follow the instructions from the strap maker instead of using this jig.
I found a mod on Instructables which shows you how to add a second LED to the F-91W, and I tried creating a simple flex PCB to make the process easier. It was so simple, in fact, that I messed it up because I wasn't paying attention before I sent it off to the manufacturer.
Anyways, after correcting the mistake, the mod does work, but I'm not sure why it makes the watch so unstable. Sometimes it works fine, and other times it keeps resetting or turning the display off. I'm guessing the Casio chip inside does not like the changes in power consumption with that second LED. Gotta admit it looks really nice in the dark though.
I also tried a couple different NFC antenna designs with ﬂex PCBs so that I could install it inside the watch body without removing the frontplate. Unfortunately there just wasn’t enough room to make the antenna strong enough, but perhaps there are other solutions I haven’t thought of yet.
If you’d like to make your own stuff, source ﬁles etc are at the top of the page, or if you’d like to buy a pre-modded watch, or any of the tools I mentioned in the video, check out the NODE shop.
I've also been getting into modding analog watches, and have created a few minimalist watchface mods for the Casio MQ24, which is basically the analog equivalent of the F-91W. I'm interested to see what else is possible with these.
Young investors have a new strategy: watching ﬁnancial disclosures of sitting members of Congress for stock tips.
Among a certain community of individual investors on TikTok, House Speaker Nancy Pelosi’s stock trading disclosures are a treasure trove. “Shouts out to Nancy Pelosi, the stock market’s biggest whale,” said user ‘ceowatchlist.’ Another said, “I’ve come to the conclusion that Nancy Pelosi is a psychic,” while adding that she is the “queen of investing.”
“She knew,” declared Chris Josephs, analyzing a particular trade in Pelosi’s ﬁnancial disclosures. “And you would have known if you had followed her portfolio.”
Last year, Josephs noticed that the trades, actually made by Pelosi’s investor husband and merely disclosed by the speaker, were performing well.
Josephs is the co-founder of a company called Iris, which shows other people’s stock trades. In the past year and a half, he has been taking advantage of a law called the Stock Act, which requires lawmakers to disclose stock trades and those of their spouses within 45 days.
Now on Josephs’ social investing platform, you can get a push notiﬁcation every time Pelosi’s stock trading disclosures are released. He is personally investing when he sees which stocks are picked: “I’m at the point where if you can’t beat them, join them,” Josephs told NPR, adding that if he sees trades on her disclosures, “I typically do buy… the next one she does, I’m going to buy.”
A Pelosi spokesperson said that she does not personally own any stocks and that the transactions are made by her husband. “The Speaker has no prior knowledge or subsequent involvement in any transactions,” said the spokesperson.
Still, Josephs views trades by federal lawmakers as "smart money" worth following and plans to track a large variety of politicians. "We don't want this to … be a left vs. right thing. We don't really care. We just want to make money," he said.
Pelosi is hardly the only lawmaker making these stock disclosures. So far this year, Senate and House members have ﬁled more than 4,000 ﬁnancial trading disclosures — with at least $315 million of stocks and bonds bought or sold. That’s according to Tim Carambat, who in 2020 created and now maintains two public databases of lawmaker ﬁnancial transactions — House Stock Watcher and Senate Stock Watcher. He says there is a significant following for his work.
“I knocked out a very, very simple version of the project in like a couple of hours. And I posted it actually to Reddit, where it gained some significant traction and people showed a lot of interest in it,” Carambat said.
Dinesh Hasija, an assistant professor of strategic management at Augusta University in Georgia, has been studying whether the market moves based on congressional disclosures. His ongoing research suggests that it does.
“Investors perceive that senators may have insider information,” he said. “And we see abnormal positive returns when there’s a disclosure by a senator.”
In other words, Hasija’s research shows that after the disclosures are published, there’s a bump in the price of stocks bought by lawmakers.
At least one ﬁnancial services consultant, Matthew Zwijacz, is planning to set up a ﬁnancial instrument that automatically tracks congressional stock picks, because, in his view, lawmakers are “probably privy to more information than just the general public.”
Both investors and government watchdogs are interested in these trades because of the possibility that lawmakers could use the private information they obtain through their jobs for money-making investment decisions.
“If the situation is that the public has lost so much trust in government that they think … the stock trades of members are based on corruption, and that [following that] corruption could beneﬁt [them]. … We have a significant problem,” said Kedric Payne, senior director of ethics at the Campaign Legal Center.
A surge of interest following congressional ﬁnancial disclosures came near the beginning of the COVID-19 pandemic, when a ﬂurry of reports indicated that lawmakers sold their stocks right before the ﬁnancial crash.
NPR reported how Senate Intelligence Committee Chairman Richard Burr privately warned a small group of well-connected constituents in February 2020 about the dire effects of the coming pandemic. He sold up to $1.72 million worth of personal stocks on a single day that same month.
A bipartisan group of senators also came under suspicion, including Sens. Dianne Feinstein, James Inhofe and Kelly Loefﬂer. After investigations by federal law enforcement, none were charged with insider trading — a very difﬁcult charge to make against a sitting lawmaker.
Congressman Raja Krishnamoorthi, a Democrat from Illinois, is part of a bipartisan group of House and Senate members who have introduced legislation banning lawmakers from owning individual stocks. He has run up against a lot of opposition to the idea.
“As I understand it, one of the perks of being a member of Congress, especially from the late 1800s on, was to be able to trade on insider information. That was a perk of being in Congress. And that has got to come to an end,” Krishnamoorthi said.
Polling shows that there is wide support for enacting this prohibition. According to a survey done this year by Data for Progress, 67% of Americans believe federal lawmakers should not own individual stocks.
There’s a deep cynicism that forms the foundation of a trading strategy based on mimicking the stock picks of lawmakers and their spouses: the notion that politicians are corrupt and that you can’t trust them not to engage in insider trading — so if the information is public, you might as well trade what they’re trading.
But despite all the skepticism about politicians and their ethical standards, the evidence doesn’t show that members of Congress make great stock pickers. While a 2004 paper found that senators generally outperformed the market, more recent academic studies in 2013 and over the last few years have suggested lawmakers are not good at picking stocks.
“Those papers have found that in fact, the trades made by senators have underperformed,” Hasija said.
This means if you ever take a stock tip from a lawmaker — cynicism aside — it might not be a very good trade.
During the pandemic, beginner investors jumped into the market, fueled by new apps and widely available data. And they’ve got a new strategy - taking stock tips from sitting members of Congress. NPR investigative correspondent Tim Mak has more.
TIM MAK, BYLINE: Among a group of retail investors on TikTok, Speaker Nancy Pelosi’s stock trading disclosures are a treasure trove.
UNIDENTIFIED PERSON #1: Shouts out to Nancy Pelosi, the stock market’s biggest whale.
UNIDENTIFIED PERSON #2: So I’ve come to the conclusion that Nancy Pelosi is a psychic and she can guess when a stock is going to pop.
CHRIS JOSEPHS: She knew. And you would have known if you followed her portfolio on Iris. Come do it. I have a group chat going…
MAK: That last voice was Chris Josephs, the co-founder of a company that shows other people’s stock trades. In the last year and a half, he’s been taking advantage of a law requiring lawmakers to disclose their stock trades and those of their spouses within 45 days. It’s called the Stock Act.
JOSEPHS: When Nancy Pelosi started being right on everything, it started with CrowdStrike. Then she made a big - or her husband made big bets on Tesla and then Google.
MAK: In 2020, Josephs noticed that the trades, actually made by Pelosi’s investor husband and disclosed by the speaker, were really good. Now on the platform he’s created, you can get a push notiﬁcation every time Pelosi’s stock disclosures are released. Josephs plans to track a large variety of federal politicians.
JOSEPHS: We don’t want this to obviously be a left versus right thing. We don’t really care. We just want to make money.
MAK: A Pelosi spokesperson said that she does not personally own any stocks and that the transactions are made by her husband. She’s not the only lawmaker who is ﬁling these disclosures. So far this year, Senate and House members have ﬁled more than 4,000 ﬁnancial trading disclosures, with at least $315 million of stocks and bonds bought or sold. Both investors and government watchdogs are interested in these trades because lawmakers could use information they get on the job to make lucrative stock decisions. NPR, for example, reported how Senate Intelligence Committee Chairman Richard Burr privately warned a small group of well-connected constituents back in February 2020 about the dire effects of the coming pandemic.
RICHARD BURR: There’s one thing that I can tell you about this. It is much more aggressive in its transmission than anything that we have seen in recent history.
MAK: He also sold up to $1.72 million worth of personal stocks on a single day that February. Other senators soon came under suspicion.
UNIDENTIFIED PERSON #3: Dianne Feinstein and James Inhofe and Georgia Senator Kelly Loefﬂer allegedly sold off stocks within days of a classiﬁed briefing about the coronavirus.
MAK: But after investigations by federal law enforcement, none were charged with insider trading. It’s a very difﬁcult charge to make against a sitting lawmaker. James Kardatzke is the CEO of Quiver Quantitative, a data platform which has also started collecting and presenting details from congressional trading disclosures.
JAMES KARDATZKE: Obviously our lawmakers have access to a lot of information that isn’t readily available to all of us, and I think it’s only natural to assume that some of them may be using that to drive their own investment decisions.
MAK: There’s a deep cynicism that forms the foundation for this trading strategy. Politicians are corrupt, and you can’t trust them not to engage in insider trading. If the information is public, you might as well trade what they’re trading.
RAJA KRISHNAMOORTHI: In this country, people already are deeply alienated from our economic system, and they’re increasingly alienated from our political system.
MAK: That’s Congressman Raja Krishnamoorthi, a Democrat from Illinois. Along with a bipartisan group in the House and Senate, he has introduced legislation banning lawmakers from owning individual stocks. He’s run up against a lot of opposition to the idea.
KRISHNAMOORTHI: As I understand it, one of the perks of being a member of Congress, especially from the late 1800s on, was to be able to trade on insider information. And that has got to come to an end.
MAK: According to a survey done this year by Data for Progress, 67% of Americans believe federal lawmakers should not own individual stocks. But while lawmakers can, the public is taking advantage of the situation. Professor Dinesh Hasija is an assistant professor at Augusta University, and he’s been researching whether the market moves based on congressional disclosures.
DINESH HASIJA: We see an abnormal positive returns when there’s a disclosure by a senator.
MAK: After the disclosures come out, there’s a bump in the price of stocks bought by lawmakers. We even spoke to one ﬁnancial services consultant planning to set up a ﬁnancial instrument that automatically tracks congressional stock picks. But for all the cynicism about politicians, academic studies over the last few years have suggested lawmakers are not so good at picking stocks. Here’s Professor Hasija again.
HASIJA: Those papers have found that in fact, the trades made by senators have underperformed.
MAK: Which means if you ever take stock tips from a member of Congress, cynicism aside, it might not be a very good trade.
Manyverse is a social networking app with features you would expect: posts, likes, profiles, private messages, etc. But it's not running in a cloud owned by a company; instead, your friends' posts and all your social data live entirely on your phone. This way, even when you're offline, you can scroll, read anything, and even write posts and like content! When your phone is back online, it syncs the latest updates directly with your friends' phones, through a shared local Wi-Fi or on the internet. We're building this free and open source project as a community effort because we believe in non-commercial, neutral, and fair mobile communication for everyone. No ads.
No pay wall.
No data centers.
No cloud. No cookies.
No company. No investors.
No token. No ICO. No blockchain.
No tracking. No spying. No analytics.
No tedious registration. No premium costs.
No annoying notifications, emails, and banners.

"Scuttlebutt is the most fun part of the Internet for me. It feels like living in the future. It has a melange of different groups and cultures all acting together towards building a beautiful community. I love it."

"#scuttleverse Oh how you have grown. Also just read the #patchwork release notes. Great work and Thank you! I love this community :D"

"Besides the warm and honest and diverse and funny community, Scuttlebutt has the most collaborative gang of software developers in the known universe."

"Over 3 years ago, I joined Scuttlebutt with dreams of a better future, where our technology includes humanity, where our economies care for relationships, where our spirits celebrate abundance, and now these dreams are alive. ☀️🏡🌈"

Manyverse already works, but it is still in beta. So far we have built:

This is just the beginning. We have many more features planned in the roadmap, but we will need your help to get there. Read our blog to keep up with updates to this project! Donate to buy us time, or contribute to our open source. Let's do this together!
Taming Go's Memory Usage, or How We Avoided Rewriting Our Client in Rust

A couple months ago, we faced a question many young startups face. Should we rewrite our system in Rust?
At the time of the decision, we were a Go and Python shop. The tool we’re building passively watches API trafﬁc to provide “one-click,” API-centric visibility, by analyzing the API trafﬁc. Our users run an agent that sends API trafﬁc data to our cloud for analysis. Our users were using us to watch more and more trafﬁc in staging and production—and they were starting to complain about the memory usage.
This led me to spend 25 days in the depths of despair and the details of Go memory management, trying to get our memory footprint to an acceptable level. This was no easy feat, as Go is a memory-managed language with limited ability to tune garbage collection.
Spoiler: I emerged victorious and our team still uses Go. We managed to tame Go’s memory management and restore an acceptable level of memory usage.
Especially since I hadn't found too many blog posts to guide me during this process, I thought I'd write up some of the key steps and lessons learned. I hope this blog post can be helpful to other people trying to reduce their memory footprint in Go!

The Akita command-line agent passively watches API traffic. It creates obfuscated traces in Akita's custom protobuf format to send to the Akita cloud for further analysis, or captures HAR files for local use. The initial version of the CLI preceded my time at Akita, but I became responsible for making sure the traffic collection scaled to our users' needs. The decision to use Go made it possible to use GoPacket, as described in Akita's previous blog entry Programmatically Analyze Packet Captures with GoPacket. This was much easier than trying to write or adapt other TCP reassembly code. But, once we started capturing traffic from staging and production environments, instead of just manual testing and continuous integration runs, the footprint of the collection agent became much more important.
One day this past summer, we noticed that the Akita CLI, normally well-behaved while collecting packet traces, would sometimes balloon to gigabytes of memory, as measured by resident set size of the container.
Figure: our memory spikes at the beginning of this endeavor.

We heard from our users shortly after this, and the task at hand became clear: reduce the memory footprint to a predictable, stable amount. Our goal was to be similar to other collection agents such as DataDog, which we also run in our environment and could use for a comparison.
This was challenging while working within the constraints of Go. The Go runtime uses a non-generational, non-compacting, concurrent mark-and-sweep garbage collector. This style of collection avoids "stopping the world" and introducing long pauses, which is great! The Go community is justifiably proud that they have achieved a good set of design trade-offs. However, Go's focus on simplicity means that there is only a single parameter, SetGCPercent, which controls how much larger the heap is than the live objects within it. This can be used to reduce memory overhead at the cost of greater CPU usage, or vice versa. Idiomatic usage of Go features such as slices and maps also introduces a lot of memory pressure "by default" because they're easy to create.
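For readers unfamiliar with that knob, here is a minimal illustration (not code from the Akita agent) of the runtime/debug API behind it; the value of 50 is just an example:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// The default is 100: the heap may grow to roughly twice the size of
	// the live set before the next collection runs. A smaller value keeps
	// the heap closer to the live set at the cost of more frequent GC
	// cycles (more CPU); a larger value does the opposite.
	previous := debug.SetGCPercent(50)
	fmt.Printf("GC percent changed from %d to 50\n", previous)
}
```

The same setting can also be supplied without code changes through the GOGC environment variable.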
When I was programming in C++, memory spikes were also a potential problem, but there were also many idiomatic ways to deal with them. For example, we could specialize memory allocations or limit particular call sites. We could benchmark different allocators, or replace one data structure by another with better memory properties. We could even change our behavior (like dropping more packets) in response to memory pressure.
I’d also helped debug similar memory problems in Java, running in the constrained environment of a storage controller. Java provides a rich ecosystem of tools for analyzing heap usage and allocation behavior on a running program. It also provides a larger set of knobs to control garbage collector behavior; the one I really missed is simply setting a maximum size. For our application, it would be acceptable to simply exit when memory usage gets too large, rather than endangering the stability of a production system by requiring a container limit to kick in.
But for my current problem, I could not give hints to the garbage collector about when or how to run. Nor could I channel all memory allocation into centralized control points. The two techniques I had are the obvious ones, but they are hard to carry out in practice:

Reduce the memory footprint of live objects. Objects that are actively in use cannot be garbage collected, so the first place to decrease memory usage is to reduce the size of those.

Reduce the total number of allocations performed. Go is concurrently garbage collecting as the program runs to reclaim memory that is not in use. But Go's design goal is to impact latency as little as possible. Not only does it take a while for Go to catch up if the rate of allocation temporarily increases, but Go deliberately lets the heap size increase so that there are no large delays waiting for memory to become available. This means that allocating lots of objects, even if they're not all live at the same time, can cause memory usage to spike until the garbage collector can do its job.
As a case study, I’ll walk through the areas in the Akita CLI where I could apply these ideas.
Our first profiles, using the Go heap profiler, seemed to point to an obvious culprit: the reassembly buffer.

As described in an earlier blog post, we use gopacket to capture and interpret network traffic. Gopacket is generally very good about avoiding excess allocations, but when TCP packets arrive out-of-order, it queues them in a reassembly buffer. The reassembly code originally allocated memory for this buffer from a "page cache" and maintained a pointer to it there, never returning memory to the garbage collector.
Our first theory was that a packet received by the host, but dropped by our packet capture, could cause a huge, persistent spike in memory usage. Gopacket allocates memory to hold data that has been received out-of-order; that is, data whose sequence numbers are ahead of where the next packet should be. HTTP can use persistent connections, so we might see megabytes, or indeed even gigabytes, of traffic while gopacket is patiently waiting for a retransmission that will never occur. This leads to both high usage immediately (because of the large amount of buffered data) and persistently (because the page cache is never freed).
We did have a timeout that forced gopacket to deliver the incomplete data, eventually. But this was set to a fairly long value, much longer than any reasonable round-trip time for real packet retransmission on a busy connection. We also had not used the settings available in gopacket to limit the maximum reassembly buffer within each stream, or the maximum “page cache” used for reassembly. This meant that the amount of memory that could be allocated had no reasonable upper limit; we were at the mercy of however fast packets arrived before our timeout.
To ﬁnd a reasonable value to curb memory usage, I looked at some of the data from our system to try to estimate a per-stream limit that would be small but still large enough to handle real retransmissions. One of our incidents where memory spiked had demonstrated 3GB growth in memory usage over a period of 40 seconds, or a data rate of about 75MByte / second. A back of the envelope calculation suggested that at that data rate, we could tolerate even a 100ms round-trip time with just 7.5 MB of reassembly buffer per connection. We reconﬁgured gopacket to use a maximum of 4,000 “pages” per connection (each 1900 bytes, for reasons I do not understand), and a shared limit of 150,000 total pages—about 200MB.
Unfortunately, we couldn't just use 200MB as a single global limit. The Akita CLI sets up a different gopacket reassembly stream per network interface. This allows it to process different interfaces in parallel, but our budget for memory usage has to be split up into separate limits for each interface. Gopacket doesn't have any way to specify a page limit across different assemblers. (And our hope that most traffic would arrive only over a single interface was quickly disproven.) This meant that instead of having a 200MB budget to deal with actual packet loss, the actual memory available to the reassembly buffer could be as low as 20MB—enough for a few connections, but not a lot. We didn't end up solving this problem; we dynamically divided up the 200MB equally among as many network interfaces as we were listening on.

We also upgraded to a newer version of gopacket that allocated its reassembly buffers from a sync.Pool. This type in the Go standard library acts like a free list, but its contents can eventually be reclaimed by the garbage collector. This meant that even if we did encounter a spike, the memory would eventually be reduced. But that only improves the average, not the worst case.
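For concreteness, here is a minimal sketch of the kind of configuration described above, assuming the github.com/google/gopacket/tcpassembly package; it is not the Akita CLI's actual code, and the helper names are made up:

```go
package capture

import (
	"time"

	"github.com/google/gopacket/tcpassembly"
)

// newBoundedAssembler caps how many reassembly "pages" may be buffered,
// so out-of-order TCP data cannot grow without bound.
func newBoundedAssembler(pool *tcpassembly.StreamPool) *tcpassembly.Assembler {
	asm := tcpassembly.NewAssembler(pool)
	// Per-connection cap: 4,000 pages of ~1900 bytes is roughly the
	// 7.5 MB per-connection estimate from the incident analysis above.
	asm.MaxBufferedPagesPerConnection = 4000
	// Shared cap across every connection handled by this assembler.
	asm.MaxBufferedPagesTotal = 150000
	return asm
}

// flushPeriodically forces delivery of incomplete data instead of
// waiting indefinitely for a retransmission that may never arrive.
func flushPeriodically(asm *tcpassembly.Assembler, timeout time.Duration) {
	for range time.Tick(timeout) {
		asm.FlushOlderThan(time.Now().Add(-timeout))
	}
}
```

The field names here come from gopacket's tcpassembly AssemblerOptions; a different reassembly package or version may expose different knobs.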
Reducing these maximums got us away from those awful 5 GiB memory spikes, but we were still sometimes spiking over 1 GiB. Still way too large.

Playing with the observations in DataDog for a while convinced me that these spikes were correlated with bursts of incoming API traffic.
To help Akita users get more control over our agent's memory footprint, we made the network processing parameters tunable via command-line parameters not listed in our main help output. You can use --gopacket-pages to control the maximum size of the gopacket "page cache", while --go-packet-per-conn controls the maximum number of pages a single TCP connection may use.

We also expose the packet capture "stream timeout" as --stream-timeout-seconds, which controls how long we will wait, just as --go-packet-per-conn controls how much data we will accumulate.
Finally, --max-http-length controls the maximum amount of data we will attempt to capture from an HTTP request payload or response body. It defaults to 10MB.
Since ﬁxing the buffer situation didn’t completely solve the memory problem, I had to keep looking for places to improve our memory footprint. No other single location was keeping a lot of memory live.
In fact, even though our agent was using up to gigabytes of memory, whenever we looked at Go’s heap proﬁle we never caught it “in the act” with more than a couple hundred MB of live objects. Go’s garbage collection strategy ensures that the total resident memory is about twice the amount occupied by all live objects—so the choice of Go is effectively doubling our costs. But our proﬁle wasn’t ever showing us 500MB of live data, just a little bit more than 200MB in the worst cases. This suggested to me that we’d done all we could with live objects.
It was time to shift focus and look instead at total allocations. Fortunately, Go's heap profiler automatically collects that as part of the same dump, so we can dig into it to see where we're allocating a lot of memory and creating a backlog for the garbage collector. Here's an example showing some obvious places to look (also available in this Gist).

One heap profile showed that 30% of allocations were under regexp.compile. We use regular expressions to recognize some data formats, and the module that does this inference was recompiling those regular expressions each time it was asked to do the work. It was simple to move the regular expressions into module-level variables, compiled once. This meant that we were no longer allocating new objects for the regular expressions each time, reducing the number of temporary allocations.
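The fix looked roughly like the following sketch; the pattern and the function are hypothetical stand-ins for the real data-format matchers, not Akita's actual code:

```go
package inference

import "regexp"

// Compiled exactly once, when the package is initialized, and reused on
// every call. Before the fix, the equivalent regexp.MustCompile call ran
// inside the function below, allocating a fresh *Regexp per invocation.
var timestampRe = regexp.MustCompile(`^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}`)

// looksLikeTimestamp is a hypothetical example of a data-format check.
func looksLikeTimestamp(value string) bool {
	return timestampRe.MatchString(value)
}
```

MustCompile panics at startup on an invalid pattern, which is exactly what you want for a constant expression.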
This part of the work felt somewhat frustrating: although nodes dropped off the allocation tree, it was harder to observe a change in end-to-end memory usage. Because we were looking for spikes in memory usage, they didn't reliably occur on demand, and we had to use proxies like local load testing.

The intermediate representation (IR) we use for the contents of requests and responses has a visitor framework. The very top source of memory allocation was allocating context objects within the visitor, which keep track of which part of the intermediate representation the code is currently accessing. Because the visitor uses recursion, we were able to replace these allocations with a simple preallocated stack. When we visit one level deeper in the IR, we allocate a new entry by incrementing an index into a preallocated range of context objects (and expanding it if necessary). This converts dozens or even hundreds of allocations into just one or two.
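A simplified sketch of the preallocated-stack idea, using hypothetical types in place of the real IR visitor, might look like this:

```go
package visitor

// visitorContext records where in the IR the visitor currently is.
type visitorContext struct {
	fieldName string
	depth     int
}

// contextStack hands out contexts from one preallocated slice instead of
// allocating a fresh context object at every level of recursion.
type contextStack struct {
	entries []visitorContext
	top     int
}

func newContextStack() *contextStack {
	return &contextStack{entries: make([]visitorContext, 64)}
}

// push reuses a preallocated entry, growing the backing slice only when
// the recursion goes deeper than it ever has before.
func (s *contextStack) push(field string) {
	if s.top == len(s.entries) {
		s.entries = append(s.entries, visitorContext{})
	}
	s.entries[s.top] = visitorContext{fieldName: field, depth: s.top}
	s.top++
}

// current returns the context for the level currently being visited.
func (s *contextStack) current() *visitorContext {
	return &s.entries[s.top-1]
}

// pop releases the top entry without freeing any memory, so the slot is
// reused by the next descent to the same depth.
func (s *contextStack) pop() {
	s.top--
}
```

Because pop only decrements an index, the entries stay allocated and are reused on the next descent rather than becoming garbage.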
A profile from before the change showed 27.1% of allocations coming from appendPath. One immediately after the change showed only 4.36%. But although the change was large, it was not as big as I expected: some of the memory allocation seemed to "shift" to a function that hadn't been a major contributor before!

Switching go tool pprof to granularity=lines causes it to show line-by-line allocation counts instead of function-level totals. This helped identify a couple of sources of allocation that were previously hidden within appendPath, such as creating a slice containing the entire path back to the root. Even though multiple slices can reuse the same underlying array if there is available capacity in the shared object, it was a big win to construct these slices lazily, on demand, instead of every time we switched contexts.
While these preallocations and deferred allocations had a big impact on the amount of memory allocated, as reported by the profiling, they did not seem to do a lot for the size of the spikes we observed. This suggests that the garbage collector was doing a good job reclaiming these temporary objects promptly. But making the garbage collector work less hard is still a win, both for understanding the remaining problems and for CPU overhead.
We used deepmind/objecthash-proto to hash our intermediate representation. These hashes are used to deduplicate objects and to index unordered collections such as response ﬁelds. We had previously identiﬁed this as a source of a lot of CPU time, but it showed up as a large allocator of memory as well. We had already taken some steps to avoid re-hashing the same objects more than once, but it was still a major user of memory and CPU. Without a major redesign to our intermediate representation and on-the-wire protocol, we weren’t going to be able to avoid hashing.
There were a few major sources of allocations in the hashing library. objecthash-proto uses reflection to access the fields in protobufs, and some reflection methods allocate memory, like reflect.packEface in the profile above. Another problem was that in order to consistently hash structures, objecthash-proto creates a temporary collection of (key hash, value hash) pairs and then sorts it by key hash. That showed up as bytes.makeSlice in the profile. And we have a lot of structures! A final annoyance is that objecthash-proto marshals every protobuf before hashing it, just to check if it's valid. So a fair amount of memory was allocated and then immediately thrown away.
After nibbling around the edges of this problem, I decided it would be better to generate functions that did the hashing just on our structures. A great thing about objecthash-proto is that it works on any protobuf! But we didn't need that; we just needed our intermediate representation to work. A quick prototype suggested it would be feasible to write a code generator that produced the same hashes, but did so in a more efficient way (a sketch follows after the list):

Precompute all the key hashes and refer to them by index. (The keys in a protobuf structure are just small integers.)

Visit the fields in a structure in the sorted order of their key hashes, so that no buffering and sorting is necessary.

Access all the fields in structures directly rather than with reflection.
All of these reduced memory usage to just the memory required for individual hash computations in the OneOfOne/xxhash library that objecthash-proto was using. For maps, we had to fall back to the original strategy of sorting the hashes, but fortunately our IR consists of relatively few maps.
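To make the idea concrete, here is an illustrative sketch of what a generated, reflection-free hash function might look like for one hypothetical message type. It uses the standard library's hash/fnv purely to stay self-contained; it does not reproduce objecthash-proto's actual hash values, and the real generator targets Akita's protobufs and the OneOfOne/xxhash library mentioned above.

```go
package irhash

import (
	"encoding/binary"
	"hash/fnv"
)

// Data stands in for one generated protobuf struct in the IR.
type Data struct {
	Name  string // protobuf field 1
	Value uint64 // protobuf field 2
}

// Key hashes are derived from the protobuf field numbers once, at code
// generation time, and emitted as package-level values.
var (
	keyHashName  = hashUint64(1)
	keyHashValue = hashUint64(2)
)

func hashUint64(v uint64) uint64 {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], v)
	h := fnv.New64a()
	h.Write(buf[:])
	return h.Sum64()
}

func hashString(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// HashData visits the fields in a fixed order chosen when the code was
// generated (sorted by key hash), so there is no per-call sorting, no
// temporary buffering, and no reflection.
func HashData(d *Data) uint64 {
	var buf [8]byte
	h := fnv.New64a()

	writePair := func(keyHash, valueHash uint64) {
		binary.BigEndian.PutUint64(buf[:], keyHash)
		h.Write(buf[:])
		binary.BigEndian.PutUint64(buf[:], valueHash)
		h.Write(buf[:])
	}

	// The generator would emit these in whatever order the key hashes
	// dictate; the order here is fixed arbitrarily for the sketch.
	writePair(keyHashName, hashString(d.Name))
	writePair(keyHashValue, hashUint64(d.Value))
	return h.Sum64()
}
```

A real generated version would also have to match objecthash-proto's handling of default values, maps, and nested messages, which this sketch ignores.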
This was the work that finally made a visible difference in the agent's behavior under load. I did it! It was the hashing.

Now the allocation profile showed mainly "useful" work we weren't going to be able to avoid: allocating space for packets as they came in.
We weren't quite done. During this whole process, what I really wanted the heap profile to tell me was "what got allocated just before Go increased the size of its heap?" Then I would have a better idea of what was causing additional memory to be used, not just which objects were live afterwards. Most of the time, it was not new "permanent" objects that caused an increase, but the allocation of temporary objects. To help answer this question and identify those transient allocations, I collected heap profiles from an agent in our production environment every 90 seconds, using the HTTP interface to the Go profiler.
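Exposing Go's profiler over HTTP is essentially a one-import change; a minimal sketch (the port is arbitrary) looks like this:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// In the real agent this listener would run alongside packet capture;
	// here it only serves the profiler endpoints, e.g. /debug/pprof/heap.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

A small loop or cron job can then fetch /debug/pprof/heap on whatever cadence you like, and `go tool pprof -base older.pb.gz newer.pb.gz` shows only what changed between two snapshots.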
Then, when I saw a spike in memory usage, I could go back to the corresponding pair of traces just before and just after. I could look at the allocations done within that 90 seconds and see what differed from the steady state. The pprof tool lets you take the difference between one trace and another, simplifying this analysis. That turned up one more place that needed to be limited in its memory usage: the difference profile showed that 200 MB—as large as our whole maximum reassembly buffer—was being allocated within just 90 seconds! I looked at the backtrace from io.ReadAll and discovered that the reason for the allocation was a buffer holding decompressed data before feeding it to a parser. This was a bit surprising, as I had already limited the maximum size of an HTTP request or response to 10MB. But that limit counted compressed size, not uncompressed size. We were temporarily allocating large amounts of memory for the uncompressed version of an HTTP response.
This prompted two different sets of improvements:

For data that we cared about, use a Reader rather than a byte slice to move data around. The JSON and YAML parsers both accept a Reader, so the output of the decompression could be fed directly into the parser, without any extra buffer (a sketch of this follows below).

For data that we weren't going to be able to fully parse anyway, we imposed a limit on the decompressed size. (Akita attempts to determine whether a textual payload can be parsed into a recognized format, but the amount of data we need to do that is small.)
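Here is a minimal sketch of the streaming approach for the first case, assuming a gzip-compressed JSON body; the function name and the 10MB cap mirror the limits discussed above but are otherwise illustrative:

```go
package payload

import (
	"compress/gzip"
	"encoding/json"
	"io"
)

// maxUncompressed caps how much decompressed data we will ever read.
const maxUncompressed = 10 << 20 // 10 MB

// decodeCompressedJSON streams gzip-compressed JSON straight into the
// parser, so the uncompressed payload is never buffered in full.
func decodeCompressedJSON(body io.Reader, out interface{}) error {
	gz, err := gzip.NewReader(body)
	if err != nil {
		return err
	}
	defer gz.Close()
	limited := io.LimitReader(gz, maxUncompressed)
	return json.NewDecoder(limited).Decode(out)
}
```

Because LimitReader stops silently at the cap, a production version would probably also want to detect truncation; the sketch omits that.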
Should we have switched to Rust instead?

While these improvements may seem obvious in hindsight, there were definitely times during the Great Memory Reduction when the team and I considered rewriting the system in Rust, a language that gives you complete control over memory.
Our stance on the Rust rewrite had been as follows:

🦀 PRO-REWRITE: Rust has manual memory management, so we would avoid the problem of having to wrestle with a garbage collector; we would just deallocate unused memory ourselves, or be able to engineer the response to increased load more carefully.

🦀 PRO-REWRITE: Rust is very popular among hip programmers, and it seems many startup-inclined developers want to join Rust-based startups.

🦀 PRO-REWRITE: Rust is a well-designed language with nice features and great error messages. People seem to complain less about the ergonomics of Rust than the ergonomics of Go.

🛑 ANTI-REWRITE: Rust has manual memory management, which means that whenever we're writing code we'll have to take the time to manage memory ourselves.

🛑 ANTI-REWRITE: Our code base is already written in Go! A rewrite would set us back several person-weeks, if not person-months, of engineering time.

🛑 ANTI-REWRITE: Rust has a higher learning curve than Go, so it would take the team (and, likely, new team members) more time to get up to speed.
And for people who thought I was joking about startups commonly facing the decision to rewrite in Rust, the Rust Rewrite Phenomenon is very real. See here: Jean recounting an actual conversation we had on our team, with proof of Rust's popularity. And I will not name names, but if you look closely at startup job postings and even blog posts, you'll see the "Rust rewrite" posts. 🦀
At the end of the day, I am glad I was able to get the memory footprint in Go to a reasonable level, so that we could focus on building new functionality instead of spending a bunch of time learning a new language and porting existing functionality. If our agent had initially been written in Python rather than Go, this may have been a different story, but Go is sufficiently low-level that I don't anticipate major issues with continuing to develop in it.

Today we're able to ingest data from our own often-busy production environment while keeping the Akita CLI memory usage low. In our production environment, the 99th percentile memory footprint is below 200MB, and the 99.9th percentile footprint is below 280MB. We've avoided having to rewrite our system in Rust. And we haven't had complaints in over a month.

While these improvements are very specific to the Akita CLI agent, the lessons learned are not:

Reduce fixed overhead. Go's garbage collection ensures that you pay for each live byte with another byte of system memory. Keeping fixed overhead low will reduce resident set size.

Profile allocation, not just live data. This reveals what is making the Go garbage collector perform work; spikes in memory usage are usually due to increased activity at those allocation sites.

Stream, don't buffer. It's a common mistake to collect the output of one phase of processing before going on to the next. But this can lead to an allocation that duplicates the memory you must already allocate for the finished result, and that maybe cannot be freed until the entire pipeline is done.

Replace frequent, small allocations with a longer-lived one covering the entire workflow. The result is not very idiomatically Go-like, but it can have a huge impact.

Avoid generic libraries that come with unpredictable memory costs. Go's reflection capabilities are great and let you build powerful tools. But using them often results in costs that are not easy to pin down or control. Idioms as simple as passing in a slice rather than a fixed-size array can have performance and memory costs. Fortunately, Go code is very easy to generate using the standard library's go/ast and go/format packages.
While the results we achieved are not as good as a complete rewrite in a language that lets us account for every byte, they are a huge improvement over the previous behavior. We feel that careful attention to memory usage is an important skill for systems programming, even in garbage-collected languages.
And if working on hard systems problems like these sounds fun to you, come work with me at Akita. We're hiring!
It was around midnight on May 22nd, 2016 that I pushed my ﬁrst commit to GitHub for a new idea. I had just celebrated 3 happy years of marriage with my wife. A few hours earlier, I had been working on a desktop app with a business partner and we were weeks away from launching. So let’s start there —
I had spent the last couple weeks setting up our billing and licensing system. Billing was easy since Stripe was the new kid on the block, and devs love shiny new things (well, Stripe was “new” to me at the time). But I wasn’t thrilled about setting up licensing, and I thought it was weird that I couldn’t ﬁnd any services that offered a licensing API like Stripe’s billing API. I paused, logged a note to mentally explore the idea later, and continued writing the in-house licensing server.
Before the ﬁrst commit, I had gotten home from $WORK some odd hours earlier, ate dinner, spent time with my wife. I was now relaxing on the couch, probably watching TV or reading a sci-ﬁ book, which I loved to do. The usual. But that idea I jotted down earlier was nagging at me. During dinner, in the shower, while watching TV. I just couldn’t get it out of my head. So, I did what all hackermans do…
I grabbed my laptop and started sketching the service using Ruby on Rails. Hours turned into days turned into months. I had written side projects before, but nothing like this.
The desktop app I mentioned earlier ﬂopped for various reasons, but this new project… I really thought I had something special. (We all do, probably.)
I kept working nights and weekends; for 3 whole years I did this. “Stripe for software licensing”, I told myself. I ultimately launched around October 2016. My 5 year “launch” anniversary is coming up. It's been a hard journey, through late nights, through burnout, through impostor syndrome, but it's been good.
But I'm getting ahead of myself here —
I’ve always been kind of anti-authority, I guess. I don’t like being told what to do, especially when I don’t agree with what I’m being told to do. I don’t like strict rules, and I definitely don’t like micro-managers, or managers in general, really; I kind of just want to be left alone to do what I do.
As you could imagine, this doesn't really jibe well with life as an employee. So I went through a few jobs, some of my own accord and some due to layoffs, but never fired. Some I liked, some I didn't, but regardless I always clashed with managers and micro-managing employers. (Don't get me wrong — I was always a good, productive employee, or at least tried to be.)
I enjoyed being a part of new startups, those that were still “scrappy.” But each time, once that growth-stage hit and managers started coming in to make things “more efﬁcient”, that’s when I knew that those types of places weren’t for me. I would get through my day job, and then spend the rest of my brainpower on side projects. I did this for a long time, and I struggled with burnout. It was on and off.
In 2019, I had had enough of the cycle. I was utterly burned out, thrice over. I was agitated all the time, and I could literally see the stress on my own face. My wife referred to me as “Grumpy” for months on end. That was my new name. It was as hard on her as it was on me, perhaps more so. I hadn't read a sci-fi book in probably a good 12 months. I felt like I was losing myself, and losing the moments I had with my family, because my mind was so overloaded and preoccupied with things that ultimately didn't f****** matter.
I couldn't focus on the present. I so wanted to, but my brain was fried.
At the time, my side project, now a side business, was making about half of what my senior software engineer position at $CRYPTO_EXCHANGE was netting me, before taxes. After many, many internal discussions, I was going to do it —
I was going to tell my wife that I wanted to quit my job.
I was scared to death that she would say that it was a bad idea. We had just had our first baby, so this seemed like abysmal timing (if there was ever such a thing as good timing). We would be sacrificing a lot financially, and losing any stability we had.
I thought to myself, I’d preface it with the fact that I’m at my breaking point — that I can’t do 2 jobs anymore. I was going to say that I either need to quit my job and go full-time on my side business, or sell it. I can’t do both jobs.
One thing to note about me is that it’s incredibly hard for me to “open up.” Sometimes I don’t even know how to put into words the way that I feel, sometimes I don’t want to hear an obvious answer. Either way, it’s hard for me. (Even now, I’m discovering some of these feelings as I’m writing this post.)
My heart was pounding. I literally put years of my life into this and I’m sitting here ready to give it up if that’s what she thinks is best for us.
I couldn’t tell her. I didn’t tell her.
This happened a few times. My heart pounding, I couldn’t tell her, I didn’t tell her. It was a bad idea, I thought. Maybe I’m just going through another rough patch and the burnout will lessen given time. (It didn’t.) So, I kept going. Until one day, after venting about a manager and having to work late at the day job, and being on overload because I was up late the night before dealing with an outage for my own business, she asked me what I wanted to do. She prefaced it by saying she supported me.
My heart was pounding out of my chest. And I told her. I told her exactly how I felt, what our options were, and what I wanted to do. I told her that we had savings, and that if it didn’t grow like I thought it would, that I would sell it.
Without hesitating, she told me to do it. “Do what you need to do”, “I’m here for you”, “we can make it work”, “you don’t need to sell it”, “quit your job.”
The next day, I put in my letter of resignation.
A weight was lifted off my shoulders.
The years leading up to that moment were challenging. The years after, challenging. The years are still challenging, but a challenge is good.
There were times, and still are times, that I felt like an impostor. I’ve been working on the business for 5 years and I still don’t have a go-to growth channel. I can’t tell you what my customer acquisition costs are. I’m still trying new marketing strategies like I was years back, albeit with a more conservative budget since the stakes are higher now. I still have little idea as to what I’m doing. But I make my customers happy, and I provide for my family, and so I think I’m doing a pretty good job.
My goal was to be self-sustainable. This year, I think I’ve met my goal of being self-sustaining. I mean, I look forward to more growth so that we can rebuild our savings, but we’ve survived for almost 2 years now, and that’s all I asked for back in 2019.
The business is doing well, growth is pretty good, customers are happy. I just recently closed our ﬁrst deal with an F500, helping them replace their legacy licensing system with us. (I’ve seen that same scenario a few times this year, probably because enterprise contracts are coming up for renewal.)
Savings are a little less than they were, but I’m happy and my family is happy.
I smoked from the age of 13 to the age of 33. I loved smoking. I loved having something to do with my hands. I loved making friends by cadging — or sharing — cigarettes. I loved learning Zippo tricks, ﬁnding beautiful old cigarette cases at ﬂea markets, learning to roll a cigarette, then learning how to do it one-handed. I loved the excuse to take breaks…
Every few months, a prominent person or publication points out that McDonalds workers in Denmark receive $22 per hour, 6 weeks of vacation, and sick pay. This compensation comes on top of the general slate of social beneﬁts in Denmark, which includes child allowances, health care, child care, paid leave, retirement, and education through college, among other things.
In these discussions, relatively little is said about how this all came to be. This is sad because it’s a good story and because the story provides a good window into why Nordic labor markets are the way they are.
McDonalds opened its ﬁrst store in Denmark in 1981. At that point, it was operating in over 20 countries and had successfully avoided unions in all but one, Sweden.
When McDonalds arrived in Denmark, the labor market was governed by a set of sectoral labor agreements that established the wages and conditions for all the workers in a given sector. Under the prevailing norms, McDonalds should have adhered to the hotel and restaurant union agreement. But they didn’t have to do so, legally speaking. The union agreement is not binding on sector employers in the same way that a contract is. You can’t sue a company for ignoring it. It is strictly “voluntary.”
McDonalds decided not to follow the union agreement and thus set up its own pay levels and work rules instead. This was a departure, not just from what Danish companies did, but even from what other similar foreign companies did. For example, Burger King, which is identical to McDonalds in all relevant respects, decided to follow the union agreement when it came to Denmark a few years earlier.
Naturally, this decision from McDonalds drew the attention of the Danish labor movement. According to press reports, the struggle to get McDonalds to follow the hotel and restaurant workers agreement began in 1982, but the efforts were very slow at first. McDonalds maintained that it had a principled position against unions, and negotiations and press overtures were unable to move them off that position.
In late 1988 and early 1989, the unions decided enough was enough and called sympathy strikes in adjacent industries in order to cripple McDonalds operations. Sixteen different sector unions participated in the sympathy strikes.
Dockworkers refused to unload containers that had McDonalds equipment in them. Printers refused to supply printed materials to the stores, such as menus and cups. Construction workers refused to build McDonalds stores and even stopped construction on a store that was already in progress but not yet complete. The typographers union refused to place McDonalds advertisements in publications, which eliminated the company’s print advertisement presence. Truckers refused to deliver food and beer to McDonalds. Food and beverage workers that worked at facilities that prepared food for the stores refused to work on McDonalds products.
In addition to wreaking havoc on McDonalds supply chains, the unions engaged in picketing and leaﬂet campaigns in front of McDonalds locations, urging consumers to boycott the company.
Once the sympathy strikes got going, McDonalds folded pretty quickly and decided to start following the hotel and restaurant agreement in 1989.
This is why McDonalds workers in Denmark are paid $22 per hour.
I bring this up because people say a lot of things about the economies of the Nordic countries and why they are so much more equal than ours. In this discussion, certainly the presence of unions and sector bargaining comes up, but rarely do you get a discussion of just how radically powerful and organized the Nordic unions are and have been. If you didn’t know better, you’d think the Nordic labor market is the way it is because all of the employers and workers came together and agreed that their system is better for everyone. And while it’s true of course that, on a day-to-day basis, labor relations in the countries are peaceful, lurking behind that peace is often a credible threat that the unions will crush an employer that steps out of line, not just by striking at one site or at one company, but by striking every single thing that the company touches.
We saw this most recently in Finland in 2019 when the state-owned postal service decided to cut the pay of 700 package handlers by moving them to a different sector agreement than the one they were currently being paid under. The unions responded by striking airlines, ferries, buses, trains, and ports. In the aftermath of these strikes, the pay cuts were reversed and the prime minister of the country resigned.
When I bring this up, people sometimes respond by saying that these kinds of strikes are illegal in the US. This is a true and worthwhile bit of information, but insofar as it is meant to imply that the different legal environment is what accounts for the labor radicalism, this obviously has things backwards. The laws aren’t driving the labor radicalism, but rather the labor radicalism is driving the laws.
We can see this clearly in another recent example, this time from Finland in 2018. There, the conservative government was preparing to pass a law that would make it easier for employers with 20 or fewer employees to ﬁre workers. The stated purpose of this was to stimulate hiring by making it easier to ﬁre and thus less risky to hire — the usual stuff.
The Finnish labor movement did not like this idea and called a massive political strike that sidelined workers in a bunch of different sectors. In response to the strike wave, the government changed the bill so it only applied to employers with 10 or fewer employees. The strikes continued and they changed the bill again, this time so it just stated generally that courts should consider an employer’s size when adjudicating wrongful dismissal cases. This was acceptable to the unions since, according to them at least, Finnish courts already do this and so the bill was basically moot. So they stopped striking.
One can only imagine what would happen if the Finnish government tried to ban sympathy strikes in the same way the US government has here.
If we are ever going to get to Nordic levels of equality, it is really hard to imagine doing it without building a similarly powerful labor movement. You can certainly get some of the way there, such as by copying certain welfare programs, but without the unions, you’ll always be missing a key piece. And while legal and policy reforms can help build the labor movement some, the power of organized labor is not ultimately rooted in the state, but rather in the ability to halt production and wreak havoc even when the state is aligned against it.
McDonalds doesn’t pay Danes high wages because of a statutory wage ﬂoor or even because the state stepped in to enforce a collective bargaining agreement. They pay high wages because back in the 1980s, Danish unions ﬂipped a switch and turned the whole business off, and McDonalds doesn’t want to ﬁnd out whether they would do it again.
This is where we need to get to.
They just don’t make computer magazines like they used to. The computer mags of the 1970s, 80s, and 90s had style. Attitude. There was something truly special about them.
Quite possibly the greatest among them was BYTE (technically it was just “Byte”, but they always drew it as uppercase on the cover). Starting in 1975, that beautiful publication — which focused on “micro computers” — ran all the way until 1998.
The content was astounding. If you were a computer nerd, you knew BYTE. You loved BYTE. You devoured every single article.
But, as I look back on BYTE magazine, it is the cover artwork that stands out to me.
Especially those by the amazing Robert Tinney.
There simply have not been computer magazine covers like this since. I'm collecting a few of my favorites here to help give all of you a taste of what 1970s and 1980s computer magazine art was like.
It was… beautiful. In large part thanks to Robert Tinney.
If you’d like to dig in a little deeper, I highly recommend reading a 2006 interview with Tinney over on VintageComputing.com.