10 interesting stories served every morning and every evening.

1. 1,597 shares, 141 trendiness

Just Be Rich 🤷‍♂️

No one wants to be the bad guy.

When narratives begin to shift and the once good guys are labelled as bad, it’s not surprising that they fight back. They’ll point to criticisms as exaggerations, and to their faults as misunderstandings.

Today’s freshly ordained bad guys are the investors and CEOs of Silicon Valley.

Once championed as the flagbearers of innovation and democratization, they’re now viewed as new versions of the monopolies of old, and they’re fighting back.

The title of Paul Graham’s essay, How People Get Rich Now, didn’t prepare me for the real goal of his words. It’s less a tutorial or analysis and more a thinly veiled attempt to ease concerns about wealth inequality.

What he fails to mention is that concerns about wealth inequality aren’t about how wealth was generated, but rather about the growing wealth gap that has accelerated in recent decades. Tech has made startups both cheaper and easier, but only for a small percentage of people. And when a select group of people has an advantage that others don’t, it compounds over time.

Paul paints a rosy picture but doesn’t mention that incomes for lower- and middle-class families have fallen since the 80s. This golden age of entrepreneurship hasn’t benefited the vast majority of people, and the increase in the Gini coefficient isn’t simply because more companies are being started. The rich are getting richer and the poor are getting poorer.

And there we have it. The slight injection of his true ideology, relegated to the notes section and vague enough that some might ignore it. But keep in mind this is the same guy who argued against a wealth tax. His seemingly impartial and logical writing attempts to hide his true intentions.

Is this really about how people get rich, or about why we should all be happy that people like PG are getting richer while tons of people are struggling to meet their basic needs? Wealth inequality is just a radical-left fairy tale to villainize the hard-working 1%. We could all be rich too; it’s so much easier now. Just pull yourself up by your bootstraps.

There’s no question that it’s easier now than ever to start a new business and reach your market. The internet has had a democratizing effect in this regard. But it’s also obvious to anyone outside the SV bubble that entrepreneurship is still only accessible to a small minority of people. Most people don’t have the safety net or mental bandwidth to even consider it. It is not a panacea for the masses.

But to use that fact to push the false claim that wealth inequality is simply due to more startups, and not a real problem, says a lot. This essay is less about how people get rich and more about why it’s okay that people like PG are getting rich. They’re better than the richest people of 1960. And we can join them. We just need to stop complaining and just be rich instead.


Read the original on keenen.xyz »

2. 730 shares, 26 trendiness

How People Get Rich Now

April 2021

Every year since 1982, Forbes magazine has published a list of the richest Americans. If we compare the 100 richest people in 1982 to the 100 richest in 2020, we notice some big differences.

In 1982 the most common source of wealth was inheritance. Of the 100 richest people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By 2020 the number of heirs had been cut in half, accounting for only 27 of the biggest 100 fortunes.

Why would the percentage of heirs decrease? Not because inheritance taxes increased. In fact, they decreased significantly during this period. The reason the percentage of heirs has decreased is not that fewer people are inheriting great fortunes, but that more people are making them.

How are people making these new fortunes? Roughly 3/4 by starting companies and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders’ or early employees’ equity (52 founders, 2 early employees, and 2 wives of founders), and 17 from managing investment funds.
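The arithmetic behind the "roughly 3/4 and 1/4" split can be sanity-checked from the figures quoted above; a minimal sketch:

```python
# Figures quoted above: of the 100 richest Americans in 2020,
# 27 inherited their fortunes and 73 made new ones.
heirs_2020 = 27
new_fortunes = 100 - heirs_2020          # 73

# Of the 73 new fortunes: 52 founders + 2 early employees
# + 2 wives of founders = 56 from equity, plus 17 from funds.
from_equity = 52 + 2 + 2                 # 56
from_funds = 17
assert from_equity + from_funds == new_fortunes

# "Roughly 3/4 by starting companies and 1/4 by investing":
print(round(from_equity / new_fortunes, 2))  # 0.77
print(round(from_funds / new_fortunes, 2))   # 0.23
```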

There were no fund managers among the 100 richest Americans in 1982. Hedge funds and private equity firms existed in 1982, but none of their founders were rich enough yet to make it into the top 100. Two things changed: fund managers discovered new ways to generate high returns, and more investors were willing to trust them with their money.

But the main source of new fortunes now is starting companies, and when you look at the data, you see big changes there too. People get richer from starting companies now than they did in 1982, because the companies do different things.

In 1982, there were two dominant sources of new wealth: oil and real estate. Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were due to real estate and only 2 to oil.

By 2020 the biggest source of new wealth was what are sometimes called “tech” companies. Of the 73 new fortunes, about 30 derive from such companies. These are particularly common among the richest of the rich: 8 of the top 10 fortunes in 2020 were new fortunes of this type.

Arguably it’s slightly misleading to treat tech as a category. Isn’t Amazon really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when what we call tech is taken for granted, it won’t seem right to put these two businesses in the same category. But at the moment at least, there is definitely something they share in common that distinguishes them. What retailer starts AWS? What car maker is run by someone who also has a rocket company?

The tech companies behind the top 100 fortunes also form a well-differentiated group in the sense that they’re all companies that venture capitalists would readily invest in, and the others mostly not. And there’s a reason why: these are mostly companies that win by having better technology, rather than just a CEO who’s really driven and good at making deals.

To that extent, the rise of the tech companies represents a qualitative change. The oil and real estate magnates of the 1982 Forbes 400 didn’t win by making better technology. They won by being really driven and good at making deals. And indeed, that way of getting rich is so old that it predates the Industrial Revolution. The courtiers who got rich in the (nominal) service of European royal houses in the 16th and 17th centuries were also, as a rule, really driven and good at making deals.

People who don’t look any deeper than the Gini coefficient look back on the world of 1982 as the good old days, because those who got rich then didn’t get as rich. But if you dig into how they got rich, the old days don’t look so good. In 1982, 84% of the richest 100 people got rich by inheritance, extracting natural resources, or doing real estate deals. Is that really better than a world in which the richest people get rich by starting tech companies?
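The 84% figure follows from numbers quoted earlier in the essay (60 heirs, plus the at-least-24 oil and real-estate fortunes among the 40 new ones); a quick sketch:

```python
# 1982 breakdown quoted earlier: of the 100 richest Americans,
# 60 were heirs and 40 made new fortunes, of which at least 24
# came primarily from oil or real estate.
heirs_1982 = 60
oil_and_real_estate = 24

old_style_rich = heirs_1982 + oil_and_real_estate
print(old_style_rich)  # 84, i.e. 84% of the richest 100
```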


Why are people starting so many more new companies than they used to, and why are they getting so rich from it? The answer to the first question, curiously enough, is that it’s misphrased. We shouldn’t be asking why people are starting companies, but why they’re starting companies again.

In 1892, the New York Herald Tribune compiled a list of all the millionaires in America. They found 4047 of them. How many had inherited their wealth then? Only about 20% — less than the proportion of heirs today. And when you investigate the sources of the new fortunes, 1892 looks even more like today. Hugh Rockoff found that “many of the richest … gained their initial edge from the new technology of mass production.”

So it’s not 2020 that’s the anomaly here, but 1982. The real question is why so few people had gotten rich from starting companies in 1982. And the answer is that even as the Herald Tribune’s list was being compiled, a wave of consolidation was sweeping through the American economy. In the late 19th and early 20th centuries, financiers like J. P. Morgan combined thousands of smaller companies into a few hundred giant ones with commanding economies of scale. By the end of World War II, as Michael Lind writes, “the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations.”

In 1960, most of the people who start startups today would have gone to work for one of them. You could get rich from starting your own company in 1890 and in 2020, but in 1960 it was not really a viable option. You couldn’t break through the oligopolies to get at the markets. So the prestigious route in 1960 was not to start your own company, but to work your way up the corporate ladder at an existing one.

Making everyone a corporate employee decreased economic inequality (and every other kind of variation), but if your model of normal

Read the original on paulgraham.com »

3. 657 shares, 26 trendiness

Introducing OpenSearch

Today, we are introducing the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana. We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from Elasticsearch 7.10.2) and OpenSearch Dashboards (derived from Kibana 7.10.2). Additionally, the OpenSearch project is the new home for our previous distribution of Elasticsearch (Open Distro for Elasticsearch), which includes features such as enterprise security, alerting, machine learning, SQL, index state management, and more. All of the software in the OpenSearch project is released under the Apache License, Version 2.0 (ALv2). We invite you to check out the code for OpenSearch and OpenSearch Dashboards on GitHub, and join us and the growing community around this effort.

We welcome individuals and organizations who are users of Elasticsearch, as well as those who are building products and services based on Elasticsearch. Our goal with the OpenSearch project is to make it easy for as many people and organizations as possible to use OpenSearch in their business, their products, and their projects. Whether you are an independent developer, an enterprise IT department, a software vendor, or a managed service provider, the ALv2 license grants you well-understood usage rights for OpenSearch. You can use, modify, extend, embed, monetize, resell, and offer OpenSearch as part of your products and services. We have also published permissive usage guidelines for the OpenSearch trademark, so you can use the name to promote your offerings. Broad adoption benefits all members of the community.

We plan to rename our existing Amazon Elasticsearch Service to Amazon OpenSearch Service. Aside from the name change, customers can rest assured that we will continue to deliver the same great experience without any impact to ongoing operations, development methodology, or business use. Amazon OpenSearch Service will offer a choice of open source engines to deploy and run, including the currently available 19 versions of ALv2 Elasticsearch (7.9 and earlier, with 7.10 coming soon) as well as new versions of OpenSearch. We will continue to support and maintain the ALv2 Elasticsearch versions with security and bug fixes, and we will deliver all new features and functionality through OpenSearch and OpenSearch Dashboards. The Amazon OpenSearch Service APIs will be backward compatible with the existing service APIs to eliminate any need for customers to update their current client code or applications. Additionally, just as we did for previous versions of Elasticsearch, we will provide a seamless upgrade path from existing Elasticsearch 6.x and 7.x managed clusters to OpenSearch.

We are not alone in our commitment to OpenSearch. Organizations as diverse as Red Hat, SAP, Capital One, and Logz.io have joined us in support.

“At Red Hat, we believe in the power of open source, and that community collaboration is the best way to build software,” said Deborah Bryant, Senior Director, Open Source Program Office, Red Hat. “We appreciate Amazon’s commitment to OpenSearch being open and we are excited to see continued support for open source at Amazon.”

“SAP customers expect a unified, business-centric and open SAP Business Technology Platform,” said Jan Schaffner, SVP and Head of BTP Foundational Plane. “Our observability strategy uses Elasticsearch as a major enabler. OpenSearch provides a true open source path and community-driven approach to move this forward.”

“At Capital One, we take an open source-first approach to software development, and have seen that we’re able to innovate more quickly by leveraging the talents of developer communities worldwide,” said Nureen D’Souza, Sr. Manager for Capital One’s Open Source Program Office. “When our teams chose to use Elasticsearch, the freedoms provided by the Apache-v2.0 license was central to that choice. We’re very supportive of the OpenSearch project, as it will give us greater control and autonomy over our data platform choices while retaining the freedom afforded by an open source license.”

“At Logz.io we have a deep belief that community driven open source is an enabler for innovation and prosperity,” said Tomer Levy, co-founder and CEO of Logz.io. “We have the highest commitment to our customers and the community that relies on open source to ensure that OpenSearch is available, thriving, and has a strong path forward for the community and led by the community. We have made a commitment to work with AWS and other members of the community to innovate and enable every organization around the world to enjoy the benefits of these critical open source projects.”

We are truly excited about the potential for OpenSearch to be a community endeavor, where anyone can contribute to it, influence it, and make decisions together about its future. Community development, at its best, lets people with diverse interests have a direct hand in guiding and building products they will use; this results in products that meet their needs better than anything else. It seems we aren’t alone in this interest; there’s been an outpouring of excitement from the community to drive OpenSearch, and questions about how we plan to work together.

We’ve taken a number of steps to make it easy to collaborate on OpenSearch’s development. The entire code base is under the Apache 2.0 license, and we don’t ask for a contributor license agreement (CLA). This makes it easy for anyone to contribute. We’re also keeping the code base well-structured and modular, so everyone can easily modify and extend it for their own uses.

Amazon is the primary steward and maintainer of OpenSearch today, and we have proposed guiding principles for development that make it clear that anyone can be a valued stakeholder in the project. We invite everyone to provide feedback and start contributing to OpenSearch. As we work together in the open, we expect to uncover the best ways to collaborate and empower all interested stakeholders to share in decision making. Cultivating the right governance approach for an open source project requires thoughtful deliberation with the community. We’re confident that we can find the best approach together over time.

Getting OpenSearch to this point required substantial work to remove Elastic commercial licensed features, code, and branding. The OpenSearch repos we made available today are a foundation on which everyone can build and innovate. You should consider the initial code to be at an alpha stage — it is not complete, not thoroughly tested, and not suitable for production use. We are planning to release a beta in the next few weeks, and expect it to stabilize and be ready for production by early summer (mid-2021).

The code base is ready, however, for your contributions, feedback, and participation. To get going with the repos, grab the source from GitHub and build it yourself.

Once you’ve cloned the repos, see what you can do. These repos are under active construction, so what works or doesn’t work will change from moment to moment. Some tasks you can do to help include:

* See what you can get running in your environment.

* Debug any issues you do find and submit PRs.

* Take a look at the contributing guides (OpenSearch, OpenSearch Dashboards) and developer guides (OpenSearch, OpenSearch Dashboards) to make sure they are clear and understandable to you.

Once you have OpenSearch and OpenSearch Dashboards running:

* Test any custom plugins or code you use and report what breaks.

* Run a sample workload and get in touch if it behaves differently from your previous setup.

* Connect it to any external tools / libraries and find out what works as expected.

We encourage everybody to engage with the OpenSearch community. We have launched a community site at opensearch.org. Our forums are where we collaborate and make decisions. We welcome pull requests through GitHub to fix bugs, improve performance and stability, or add new features. Keep an eye out for “help-wanted” tags on issues.

We’re so thrilled to have you along with us on this journey, and we can’t wait to see where it leads. We look forward to being part of a growing community that drives OpenSearch to become software that everyone wants to innovate on and use.


Read the original on aws.amazon.com »

4. 379 shares, 22 trendiness

Add chrome 0day · r4j0x00/exploits@7ba55e5


Read the original on github.com »

5. 320 shares, 23 trendiness


The repository is put into the merge state. The MERGE_HEAD file is written and its contents set to giverHash. The MERGE_MSG file is written and its contents set to a boilerplate merge commit message. A merge diff is created that will turn the contents of receiver into the contents of giver. This contains the path of every file that is different and whether it was added, removed or modified, or is in conflict. Added files are added to the index and working copy. Removed files are removed from the index and working copy. Modified files are modified in the index and working copy. Files that are in conflict are written to the working copy to include the receiver and giver versions. Both the receiver and giver versions are written to the index.
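Gitlet models stock Git here, so the same merge-state files can be observed in a real repository. A minimal sketch (assuming `git` is on PATH) that forces a conflicted merge and then reads MERGE_HEAD and MERGE_MSG:

```python
import os
import subprocess
import tempfile

def git(*args, repo):
    # Run a git command in the demo repo; a conflicted merge exits
    # non-zero, so we capture output instead of raising.
    return subprocess.run(["git", *args], cwd=repo,
                          capture_output=True, text=True)

repo = tempfile.mkdtemp()
git("init", repo=repo)
git("config", "user.email", "demo@example.com", repo=repo)
git("config", "user.name", "Demo", repo=repo)

path = os.path.join(repo, "file.txt")

# Base commit: the common ancestor of receiver and giver.
open(path, "w").write("base\n")
git("add", "file.txt", repo=repo)
git("commit", "-m", "base", repo=repo)
receiver = git("rev-parse", "--abbrev-ref", "HEAD", repo=repo).stdout.strip()

# The giver branch changes the file one way...
git("checkout", "-b", "giver", repo=repo)
open(path, "w").write("giver\n")
git("commit", "-am", "giver change", repo=repo)
giver_hash = git("rev-parse", "HEAD", repo=repo).stdout.strip()

# ...and the receiver branch changes it another way, so the merge conflicts.
git("checkout", receiver, repo=repo)
open(path, "w").write("receiver\n")
git("commit", "-am", "receiver change", repo=repo)
git("merge", "giver", repo=repo)

# The repository is now in the merge state described above.
merge_head = open(os.path.join(repo, ".git", "MERGE_HEAD")).read().strip()
merge_msg = open(os.path.join(repo, ".git", "MERGE_MSG")).read()

print(merge_head == giver_hash)   # True: MERGE_HEAD holds the giver's hash
print("giver" in merge_msg)       # True: the boilerplate message names the branch
```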


Read the original on gitlet.maryrosecook.com »

6. 313 shares, 11 trendiness

NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems

Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU/SoC. Dubbed Grace — after Grace Hopper, the computer programming pioneer and US Navy rear admiral — the CPU is NVIDIA’s latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.

With two years to go until the chip is ready, NVIDIA is playing things relatively coy at this time. The company is offering only limited details for the chip — it will be based on a future iteration of Arm’s Neoverse cores, for example — as today’s announcement is a bit more focused on NVIDIA’s future workflow model than it is speeds and feeds. If nothing else, the company is making it clear early on that, at least for now, Grace is an internal product for NVIDIA, to be offered as part of their larger server offerings. The company isn’t directly gunning for the Intel Xeon or AMD EPYC server market, but instead they are building their own chip to complement their GPU offerings, creating a specialized chip that can directly connect to their GPUs and help handle enormous, trillion parameter AI models.

More broadly speaking, Grace is designed to fill the CPU-sized hole in NVIDIA’s AI server offerings. The company’s GPUs are incredibly well-suited for certain classes of deep learning workloads, but not all workloads are purely GPU-bound, if only because a CPU is needed to keep the GPUs fed. NVIDIA’s current server offerings, in turn, typically rely on AMD’s EPYC processors, which are very fast for general compute purposes, but lack the kind of high-speed I/O and deep learning optimizations that NVIDIA is looking for. In particular, NVIDIA is currently bottlenecked by the use of PCI Express for CPU-GPU connectivity; their GPUs can talk quickly amongst themselves via NVLink, but not back to the host CPU or system RAM.

The solution to the problem, as was the case even before Grace, is to use NVLink for CPU-GPU communications. Previously NVIDIA has worked with the OpenPOWER foundation to get NVLink into POWER9 for exactly this reason, however that relationship is seemingly on its way out, both as POWER’s popularity wanes and POWER10 is skipping NVLink. Instead, NVIDIA is going their own way by building an Arm server CPU with the necessary NVLink functionality.

The end result, according to NVIDIA, will be a high-performance and high-bandwidth CPU that is designed to work in tandem with a future generation of NVIDIA server GPUs. With NVIDIA talking about pairing each NVIDIA GPU with a Grace CPU on a single board — similar to today’s mezzanine cards — not only does CPU performance and system memory scale up with the number of GPUs, but in a roundabout way, Grace will serve as a co-processor of sorts to NVIDIA’s GPUs. This, if nothing else, is a very NVIDIA solution to the problem, not only improving their performance, but giving them a counter should the more traditionally integrated AMD or Intel try some sort of similar CPU+GPU fusion play.

By 2023 NVIDIA will be up to NVLink 4, which will offer at least 900GB/sec of cumulative (up + down) bandwidth between the SoC and GPU, and over 600GB/sec cumulative between Grace SoCs. Critically, this is greater than the memory bandwidth of the SoC, which means that NVIDIA’s GPUs will have a cache coherent link to the CPU that can access the system memory at full bandwidth, and also allows the entire system to have a single shared memory address space. NVIDIA describes this as balancing the amount of bandwidth available in a system, and they’re not wrong, but there’s more to it. Having an on-package CPU is a major means towards increasing the amount of memory NVIDIA’s GPUs can effectively access and use, as memory capacity continues to be the primary constraining factor for large neural networks — you can only efficiently run a network as big as your local memory pool.

And this memory-focused strategy is reflected in the memory pool design of Grace, as well. Since NVIDIA is putting the CPU on a shared package with the GPU, they’re going to put the RAM down right next to it. Grace-equipped GPU modules will include a to-be-determined amount of LPDDR5x memory, with NVIDIA targeting at least 500GB/sec of memory bandwidth. Besides being what’s likely to be the highest-bandwidth non-graphics memory option in 2023, NVIDIA is touting the use of LPDDR5x as a gain for energy efficiency, owing to the technology’s mobile-focused roots and very short trace lengths. And, since this is a server part, Grace’s memory will be ECC-enabled, as well.

As for CPU performance, this is actually the part where NVIDIA has said the least. The company will be using a future generation of Arm’s Neoverse CPU cores, where the initial N1 design has already been turning heads. But other than that, all the company is saying is that the cores should break 300 points on the SPECrate2017_int_base throughput benchmark, which would be comparable to some of AMD’s second-generation 64 core EPYC CPUs. The company also isn’t saying much about how the CPUs are configured or what optimizations are being added specifically for neural network processing. But since Grace is meant to support NVIDIA’s GPUs, I would expect it to be stronger where GPUs in general are weaker.

Otherwise, as mentioned earlier, NVIDIA’s big vision for Grace is significantly cutting down the time required for the largest neural networking models. NVIDIA is gunning for 10x higher performance on 1 trillion parameter models, and their performance projections for a 64 module Grace+A100 system (with theoretical NVLink 4 support) would be to bring down training such a model from a month to three days. Or alternatively, being able to do real-time inference on a 500 billion parameter model on an 8 module system.

Overall, this is NVIDIA’s second real stab at the data center CPU market — and the first that is likely to succeed. NVIDIA’s Project Denver, which was originally announced just over a decade ago, never really panned out as NVIDIA expected. The family of custom Arm cores was never good enough, and never made it out of NVIDIA’s mobile SoCs. Grace, in contrast, is a much safer project for NVIDIA; they’re merely licensing Arm cores rather than building their own, and those cores will be in use by numerous other parties, as well. So NVIDIA’s risk is reduced to largely getting the I/O and memory plumbing right, as well as keeping the final design energy efficient.

If all goes according to plan, expect to see Grace in 2023. NVIDIA is already confirming that Grace modules will be available for use in HGX carrier boards, and by extension DGX and all the other systems that use those boards. So while we haven’t seen the full extent of NVIDIA’s Grace plans, it’s clear that they are planning to make it a core part of future server offerings.

And even though Grace isn’t shipping until 2023, NVIDIA has already lined up their first customers for the hardware — and they’re supercomputer customers, no less. Both the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory are announcing today that they’ll be ordering supercomputers based on Grace. Both systems will be built by HPE’s Cray group, and are set to come online in 2023.

CSCS’s system, dubbed Alps, will be replacing their current Piz Daint system, a Xeon plus NVIDIA P100 cluster. According to the two companies, Alps will offer 20 ExaFLOPS of AI performance, which is presumably a combination of CPU, CUDA core, and tensor core throughput. When it’s launched, Alps should be the fastest AI-focused supercomputer in the world.

Interestingly, however, CSCS’s ambitions for the system go beyond just machine learning workloads. The institute says that they’ll be using Alps as a general purpose system, working on more traditional HPC-type tasks as well as AI-focused tasks. This includes CSCS’s traditional research into weather and the climate, which the pre-AI Piz Daint is already used for as well.

As previously mentioned, Alps will be built by HPE, who will be basing it on their previously announced Cray EX architecture. This would make NVIDIA’s Grace the second CPU option for Cray EX, along with AMD’s EPYC processors.

Meanwhile Los Alamos’ system is being developed as part of an ongoing collaboration between the lab and NVIDIA, with LANL set to be the first US-based customer to receive a Grace system. LANL is not discussing the expected performance of their system beyond the fact that it’s expected to be “leadership-class,” though the lab is planning on using it for 3D simulations, taking advantage of the largest data set sizes afforded by Grace. The LANL system is set to be delivered in early 2023.


Read the original on www.anandtech.com »

7. 285 shares, 15 trendiness

Intel in talks to produce chips for automakers within six to nine months -CEO

(Reuters) - The chief ex­ec­u­tive of Intel Corp told Reuters on Monday the com­pany is in talks to start pro­duc­ing chips for car mak­ers to al­le­vi­ate a short­age that has idled au­to­mo­tive fac­to­ries.

Chief Executive OfÔ¨Ācer Pat Gelsinger said the com¬≠pany is talk¬≠ing to com¬≠pa¬≠nies that de¬≠sign chips for au¬≠tomak¬≠ers about man¬≠u¬≠fac¬≠tur¬≠ing those chips in¬≠side Intel‚Äôs fac¬≠tory net¬≠work, with the goal of pro¬≠duc¬≠ing chips within six to nine months. Gelsinger ear¬≠lier on Monday met with White House of¬≠Ô¨Ā¬≠cials to dis¬≠cuss the semi¬≠con¬≠duc¬≠tor sup¬≠ply chain.

Intel is one of the last com¬≠pa¬≠nies in the semi¬≠con¬≠duc¬≠tor in¬≠dus¬≠try that both de¬≠signs and man¬≠u¬≠fac¬≠tures its own chips. The com¬≠pany last month said it would open its fac¬≠to¬≠ries up to out¬≠side cus¬≠tomers and build fac¬≠to¬≠ries in the United States and Europe in a bid to counter the dom¬≠i¬≠nance of Asian chip man¬≠u¬≠fac¬≠tur¬≠ers such as Taiwan Semiconductor Manufacturing Co and Samsung Electronics Co Ltd .

But Gelsinger said Monday that he told White House officials during the meeting that Intel will open its existing factory network to auto chip companies to provide more immediate help with a shortage that has disrupted assembly lines at Ford Motor Co and General Motors Co.

"We're hoping that some of these things can be alleviated, not requiring a three- or four-year factory build, but maybe six months of new products being certified on some of our existing processes," Gelsinger said. "We've begun those engagements already with some of the key components suppliers."

Gelsinger did not name the com­po­nent sup­pli­ers but said that the work could take place at Intel’s fac­to­ries in Oregon, Arizona, New Mexico, Israel or Ireland.


Read the original on www.reuters.com »

8 284 shares, 9 trendiness, words and minutes reading time

Facebook's 'Clear History' Tool Doesn't Clear Shit

When we talk about Facebook's myriad foibles and fuckups, we're usually laying the blame on things that happen within the Big Blue App or, increasingly, on the social network's CEO. What's less discussed are the company's ties to the potentially millions of sites and services using its software. Now, thankfully, we can get a window into that for ourselves. But don't get too excited.

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced, in honor of Data Privacy Day (which is apparently a thing), the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between Facebook profiles and their off-platform activity.

"To help shed more light on these practices that are common yet not always well understood, today we're introducing a new way to view and control your off-Facebook activity," Zuckerberg said in the post. "Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to."

Zuck's use of the phrases "control your off-Facebook activity" and "clear this information from your account" is misleading: you're not really controlling or clearing much of anything. By using this tool, you're just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between the sites and services you peruse daily that have some sort of Facebook software installed and your on-platform activity on Facebook or Instagram.

The only thing you’re clear­ing is a con­nec­tion Facebook made be­tween its data and the data it gets from third par­ties, not the data it­self.

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn't've been surprised that Facebook hoovered data from more than 520 partners across the internet, either sites I'd visited or apps I'd downloaded. For Gizmodo alone, Facebook tracked "252 interactions" drawn from the handful of plug-ins our blog has installed. (To be clear, you're going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e., not just on our site.)

These plug-ins, or "business tools," as Facebook describes them, are the pipeline the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

- Jane buys a pair of shoes from an online clothing and shoe store.
- The store shares Jane's activity with us using our business tools.
- We receive Jane's off-Facebook activity and we save it with her Facebook account. The activity is saved as "visited the Clothes and Shoes website" and "made a purchase".
- Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.

Here's the catch, though: When I hit the handy "clear history" button that Facebook now provides, it won't do jack shit to stop a given shoe store from sharing my data with Facebook, which explicitly laid this out for me when I hit that button:

Your ac­tiv­ity his­tory will be dis­con­nected from your ac­count. We’ll con­tinue to re­ceive your ac­tiv­ity from the busi­nesses and or­ga­ni­za­tions you visit in the fu­ture.

Yes, it's confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more, but that bucket of data won't be connected to your Facebook identity.
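To make the distinction concrete, here is a toy sketch of the bucket model described above. Every name in it is hypothetical; this only illustrates the semantics being reported on, not Facebook's actual code or API.

```javascript
// Toy model of the "two buckets" described above. All names are
// hypothetical -- this illustrates the semantics, nothing more.
class Profile {
  constructor() {
    this.onPlatform = [];  // activity inside Facebook/Instagram
    this.offPlatform = []; // activity reported by third-party sites and apps
    this.linked = true;    // whether the two buckets are connected
  }
  receiveOffPlatformEvent(event) {
    // Third parties keep sending data no matter what the user clicks.
    this.offPlatform.push(event);
  }
  clearHistory() {
    // "Clearing" severs the link between the buckets. It deletes nothing
    // and does not stop future collection.
    this.linked = false;
  }
}

const me = new Profile();
me.receiveOffPlatformEvent({ site: 'shoestore.example', action: 'purchase' });
me.clearHistory();
me.receiveOffPlatformEvent({ site: 'shoestore.example', action: 'visit' });

console.log(me.offPlatform.length); // 2: the data is all still there
console.log(me.linked);             // false: it's merely disconnected
```

Note that after "clearing," the off-platform bucket keeps growing; only the link flag changed.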

The data third parties collect about you technically isn't Facebook's responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden's metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I'm purchasing in-store aren't safe from being added as a data point tied to the collective profile Facebook's gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden, or anywhere, really, to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be "disconnected from [my] account."

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across... those sites and apps. Only now, my on-Facebook life (the cat groups I join, the statuses I comment on, the concerts I'm "interested" in but never actually attend) won't be a part of that profile.

Or put an­other way: Facebook just an­nounced that it still has its ten­ta­cles in every part of your life in a way that’s im­pos­si­ble to un­tan­gle your­self from. Now, it just does­n’t need the so­cial net­work to do it.


Read the original on gizmodo.com »

9 281 shares, 13 trendiness, words and minutes reading time

Building React + Vue support for Tailwind UI – Tailwind CSS

Hey! We’re get­ting re­ally close to re­leas­ing React + Vue sup­port for Tailwind UI, so I thought it would be in­ter­est­ing to share some of the be­hind-the-scenes ef­forts that have gone into even mak­ing it pos­si­ble.

From the day we started working on Tailwind UI somewhere in mid-2019, I knew that ultimately it would be 10x more valuable to people if they could grab fully interactive examples built using their favorite JS framework. Trying to make that happen for the first release was way too ambitious though, so we had to figure out how to get there one step at a time.

We decided to focus on vanilla HTML first because it's totally universal, and even if something like JSX would be more helpful for some people, there are lots of existing tools out there for converting HTML to JSX that people could lean on already.

We also made the hard trade-off not to provide any JS for interactions like toggling a responsive menu or opening and closing a modal dialog in the first version. I felt like anything we provided would just do more harm than good, because there's no one JS framework that makes up the majority of the Tailwind user base. If we catered to React developers, we'd be making it harder to use for the 70% of people not using React. If we catered to Vue developers, we'd be making it harder for the 70% of people not using Vue. And if we tried to write it in custom vanilla JS, well, we'd be making it harder for literally everyone. (Seriously, do you have any idea how much code it takes to build a robust enter/leave transition system from scratch in JS?)

So in­stead I just doc­u­mented the dif­fer­ent states us­ing com­ments in the HTML, and left it to the end user to wire it up with their fa­vorite JS frame­work. I know a lot of peo­ple love that about Bulma, and I think it was a great ap­proach for us to start with as well.

But once we felt like Tailwind UI was pretty fleshed out with hundreds of great examples, we decided it was time to tackle the JS problem and see what we could do.

As an abstract concept, adding "JavaScript support" to Tailwind UI sounds straightforward, but when you dig into the details it is not. There are so many decisions to make about what to even build, and so many trade-offs to consider when trying to make something useful for as many people as possible.

I tossed the whole con­cept around in the back of my head for a full year while work­ing on Tailwind UI be­fore I ac­tu­ally had a plan I was happy with. Ultimately, these are the core val­ues I de­cided on when de­sign­ing a so­lu­tion:

* It needs to stay just a code snippet. The promise of Tailwind UI is that it's easy to customize and adapt by directly editing the code, and any JS examples we provide need to respect this foundational idea.

* The JS needs to be updateable. Unlike the markup, which we expect people to just totally own and edit to their heart's content, the JS needs to come from node_modules somehow, because building these things right is hard, there are going to be bugs, and we want to be able to fix them for people without asking them to copy a new code snippet. On top of that, we don't want people to have to carefully transport 200 lines of JS they didn't write around their codebase, constantly worrying about accidentally breaking some small implementation detail by mistake.

* It just has to be better than vanilla HTML. At the end of the day, the most important thing is that we make the existing experience better for people using the JS frameworks we decide to add support for first. Any time I found myself frustrated by two competing trade-offs that made it hard to make something perfect, asking myself "is this still strictly better and in no way worse for framework X users than vanilla HTML?" provided a lot of clarity.

The other thing that was really important to me is that none of the underlying JS stuff be proprietary or Tailwind UI-specific. To me, Tailwind UI is not a UI kit like Ant Design or Material UI. Those are great projects, but it's not what I wanted to build.

To me, Tailwind UI is a col­lec­tion of blue­prints, show­ing you how to build awe­some stuff us­ing tools that are al­ready avail­able to you. If you want to use things ex­actly as they come off the shelf you to­tally can and you’ll get great re­sults. But you should also be able to use Tailwind UI as a help­ful start­ing point, tweak it to the nines, and end up with some­thing that feels uniquely yours, even if we gave you a boost at the be­gin­ning.

So be­fore we could add JavaScript sup­port to Tailwind UI, we needed to build some tools.

Years ago I remember seeing Kent C. Dodds' downshift library and thinking "man, this is a cool concept: all of the complex behavior is tucked away in the library, but all of the actual markup and styling is left to the user."

This sort of approach is the perfect fit for Tailwind philosophically, because the entire goal of Tailwind is to help you build totally custom designs more quickly. Tailwind plus a library of JS components that abstract away all of the keyboard navigation and accessibility logic, without including any design opinions, would be such a powerful combo. It would let teams building totally custom UIs move almost as fast as teams content to use hard-to-customize, opinionated frameworks.

We looked to see if there were any other tools out there solv­ing these same prob­lems, and while there were a few awe­some pro­jects in the space (Reach UI and Reakit es­pe­cially at the time, and re­act-aria since start­ing on our own li­brary, phe­nom­e­nal work by all those folks), ul­ti­mately we de­cided that some­thing so im­por­tant for our com­pany would be best to build and con­trol our­selves.

There were two big rea­sons we ended up start­ing our own pro­ject:

* We wanted the APIs to work well with a class-based styling solution like Tailwind. A lot of the other tools out there expected you to write custom CSS to target the different bits of each component, which is very different from the workflow you use to style things with Tailwind. We wanted to design something that was very class-friendly.

* We wanted to support multiple frameworks using a consistent API. There are React libraries, Vue libraries, Angular libraries, and others, but each one is different, designed by different people with different tastes. We wanted something that would be as consistent as possible from framework to framework, so that the framework-specific examples in Tailwind UI wouldn't be radically different from each other.

I was re­ally ex­cited about what we were go­ing to end up with at the end, but holy crap this was go­ing to be a lot of work.

We decided to call this project "Headless UI," and in August of last year Robin Malfait joined the team to work on it full-time, pretty much exclusively.

The very first thing he worked on was a Transition component for React that would allow you to add enter/leave animations to elements entirely using classes, heavily inspired by the transition component in Vue.

This is a great example of what I meant earlier when I said we really wanted to design components that were "class-friendly." This component makes it really easy to style your enter/leave transitions with regular old Tailwind utility classes, so it feels just like styling anything else in your app. It's also not coupled to Tailwind in any way, though, and you can use whatever classes you want!
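As a rough illustration of what "class-friendly" means in practice, here is a framework-free sketch (my own simplification, not Headless UI's internals): the transition is described entirely as plain Tailwind utility class strings, and the component's job reduces to deciding which of them are on the element at each moment.

```javascript
// Simplified sketch of class-driven enter/leave transitions. The class
// strings are ordinary Tailwind utilities; nothing here is specific to
// any framework or to Headless UI's real implementation.
const fade = {
  enter: 'transition-opacity duration-75', // applied for the whole enter phase
  enterFrom: 'opacity-0',                  // first frame of entering
  enterTo: 'opacity-100',                  // remaining frames of entering
  leave: 'transition-opacity duration-150',
  leaveFrom: 'opacity-100',
  leaveTo: 'opacity-0',
};

// Which classes belong on the element at a given step of a phase?
function classesFor(spec, phase, step) {
  const base = spec[phase]; // 'enter' or 'leave'
  const frame = step === 'start' ? spec[phase + 'From'] : spec[phase + 'To'];
  return `${base} ${frame}`;
}

console.log(classesFor(fade, 'enter', 'start'));
// 'transition-opacity duration-75 opacity-0'
```

Because the whole transition is just data (class strings), swapping Tailwind utilities for any other classes changes nothing about the mechanism.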

We published the first public release in October, and it included React and Vue libraries with the first three components.

We landed on a set of APIs that use "compound components" to abstract away all of the complexity while communicating with each other via context (or provide/inject in Vue).

Here’s what a cus­tom drop­down looks like in React:

import { Menu } from '@headlessui/react'

function MyDropdown() {
  return (
    <Menu>
      <Menu.Button>More</Menu.Button>
      <Menu.Items>
        <Menu.Item>
          {({ active }) => (
            <a className={active ? 'bg-gray-100' : ''} href="/account-settings">
              Account settings
            </a>
          )}
        </Menu.Item>
        {/* ...more items... */}
      </Menu.Items>
    </Menu>
  )
}

You'll notice that to do things like style the "active" dropdown item, we use a render prop (or a scoped slot in Vue).

Render props aren’t as com­mon as they used to be be­cause hooks have re­placed the need for them in many sit­u­a­tions. But for this sort of prob­lem where you need ac­cess to in­ter­nal state that’s man­aged by the com­po­nent you’re con­sum­ing, they are still the right (only?) so­lu­tion, and very el­e­gant.
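Stripped of React specifics, the render-prop idea is just "the component owns the state and calls a function you supply to produce the markup." A minimal framework-free sketch, with illustrative names of my own:

```javascript
// A render prop in miniature: the "component" owns the state (which item
// is active) and hands it to a function you supply, so you control the
// markup while it controls the behavior.
function renderMenuItems(items, activeIndex, renderItem) {
  return items.map((item, i) => renderItem({ item, active: i === activeIndex }));
}

const html = renderMenuItems(['Edit', 'Delete'], 1, ({ item, active }) =>
  `<a class="${active ? 'bg-gray-100' : ''}">${item}</a>`
);

console.log(html.join(''));
// '<a class="">Edit</a><a class="bg-gray-100">Delete</a>'
```

The caller never sees how `active` is computed; it only decides what to render for each state, which is exactly the access-to-internal-state problem described above.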

After releasing the first version of Headless UI in October, we buckled down for a couple of months to release Tailwind CSS v2.0, and then spent the last month of the year focused on bug fixes and lots of project housekeeping before taking a break for the holidays.

When we came back, we got to work on actually adding React + Vue support to Tailwind UI itself, and the first thing we needed to do was audit all of the interactive behavior we needed for the examples in Tailwind UI and figure out what Headless UI abstractions we needed to design.

This was actually a pretty interesting and challenging job, because it's really not always obvious how a certain design-specific interaction should map to an established UI pattern that has known accessibility expectations.

* A drop­down should be a menu (well, some­times…)

But some are a lot trick­ier. For ex­am­ple, what about mo­bile menus, the kind of thing you open with a ham­burger but­ton?

If it opens kinda like a popup, is that a menu like a drop­down?

What if it slides in from the side of the screen?

What if it just opens in place and pushes the rest of the page fur­ther down?

We worked through ques­tions like this reg­u­larly, and land­ing on good so­lu­tions took a lot of re­search and ex­per­i­men­ta­tion. We’re lucky to have David Luhr on the team who has spe­cial­ized in ac­ces­si­bil­ity for a long time, and with his help we were able to feel re­ally good about the so­lu­tions we landed on.

Here’s what we de­cided we needed in or­der to sup­port the pat­terns that al­ready ex­isted in Tailwind UI:

* Menu Button. Used for drop­down menus that only con­tain links or but­tons, like a lit­tle ac­tions menu at the end of a table row.

* Listbox. For custom select implementations where you want to include extra stuff in the option elements, for example a country picker where you put a flag next to each country.

* Switch. For cus­tom tog­gle switches that be­have like check­boxes.

* Disclosure. For showing/hiding content in place; think collapsible FAQ questions. Also useful for bigger chunks of UI, like a mobile menu that opens in place and pushes the rest of the page down.

* Dialog. For, well, modal dialogs! But also for mobile navigation that slides out from the side of the page, and other "take-over"-style UIs, even if they don't look like a traditional panel-centered-in-the-screen modal.

* Popover. For panels that pop up on top of the page when you click a button. This is useful for menus where you need lots of custom content that would violate the strictness of regular role="menu" menu buttons. We use these for some mobile menus, flyout menus in navigation bars, and other interesting places too. It's kind of like a menu/disclosure hybrid.

* Radio Group. For cus­tom ra­dio se­lec­tion UIs, like where you want a set of click­able cards in­stead of a bor­ing lit­tle ra­dio cir­cle.

We ran into tons of chal­lenges build­ing this stuff, es­pe­cially around com­plex stuff like fo­cus man­age­ment, and es­pe­cially around nested fo­cus man­age­ment.

Imagine you have a modal that opens, and inside that modal there's a dropdown. You open the modal, then open the dropdown, and hit escape. What happens? The dropdown should close, right? But the modal should stay open.

I guarantee 99% of modals on the internet would close too in this case, even though they aren't supposed to. But not ours. Ours works!
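One plausible way to get that behavior is to track open layers as a stack, so Escape only ever closes whatever opened most recently. This is a sketch of the general technique, not Headless UI's actual implementation:

```javascript
// The 99% failure mode: every layer listens for Escape and closes itself.
// Tracking open layers as a stack fixes it: Escape pops only the topmost.
const openLayers = [];

function open(layer) {
  openLayers.push(layer);
}

function onEscape() {
  // Only the most recently opened layer closes; anything under it survives.
  return openLayers.pop();
}

open('modal');
open('dropdown'); // opened from inside the modal

const closed = onEscape();
console.log(closed);     // 'dropdown'
console.log(openLayers); // ['modal'] -- the modal stays open
```

A second Escape would then close the modal, which matches the nesting a user expects.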

We (well mostly Robin) spent months work­ing on lit­tle de­tails like this to make every­thing as bul­let-proof as pos­si­ble, and while I’m sure there have to be bugs hid­ing in there still some­where, where we ended up feels so rock solid com­pared to al­most every UI you en­counter day-to-day on the web.

We still have a lot of new pat­terns we want to add to Headless UI like tabs, ac­cor­dions, maybe even gulp a datepicker, and we’re look­ing for­ward to ex­plor­ing other frame­works in the fu­ture (Alpine.js is next on our list), but we’re su­per proud to call what we’re re­leas­ing this week Headless UI v1.0 and com­mit to a sta­ble API go­ing for­ward.

We think you’re gonna love it. </TimCook>

With the Headless UI stuff figured out, the next big problem was figuring out exactly what a React or Vue version of an existing Tailwind UI example should look like.

The examples in Tailwind UI are pure HTML snippets: you find something you like, copy the HTML into your project, then tweak it as much as you like, chop it up into individual components, whatever you want. We don't make any assumptions about how you're going to use it, what elements you're going to keep or delete, or how you want to abstract away any duplication with your preferred tools.

This is an easy decision when working with pure HTML; what other choice do you really even have? But when offering framework-specific examples, it gets a lot trickier to know exactly what to provide.

The biggest ques­tion was how hard should we try to re­move any du­pli­ca­tion, and what are the right ap­proaches to do­ing so?

Both React and Vue are com­po­nent frame­works, and the way you reuse code in your pro­jects is by ex­tract­ing bits of UI into com­po­nents that you can use over and over again.

The challenge is that creating components like that is always very project-specific. Take this list component for example.

Fully componentized in a real app, the final code might look something like this.

It looks super clean, sure, but it's forcing a lot of opinions on you.

For ex­am­ple, it as­sumes the items are team mem­bers. What if you’re build­ing an in­voic­ing app and you want to use this pat­tern for a list of clients in­stead? Hell, you might be us­ing this for a sports bet­ting app and these should be base­ball teams, not even peo­ple!

It also makes as­sump­tions about the shape of a mem­ber ob­ject. It would have to en­code that it’s pulling out a name and an email prop­erty, even though your data might be dif­fer­ent.

The other issue is that in frameworks like Vue, you can only have one component per file. This means copying an example made up of 4-5 subcomponents would mean you have to copy 4-5 different snippets, create files for each one, and link them all together with the correct names/paths.

To me, something about doing all of this for people felt like going too far, at least for the problem we're trying to solve today. When everything is super broken up like that, with predefined prop APIs and deliberately chosen component names, it feels like you aren't supposed to change it anymore. What I love about Tailwind UI is that clicking the "code" tab feels like opening up some complex piece of electronics and seeing all of the circuitry right there in front of you. It's a learning opportunity, and you can read the markup and class names and understand how it all works together.

I wres­tled with it for a long time, but ul­ti­mately de­cided that right now we were try­ing to solve two main prob­lems:

* Give people code using the syntax they actually need, like giving React users JSX instead of HTML so they don't have to manually convert things like for to htmlFor.

* Make the interactive elements work out of the box, so dropdowns, mobile menus, toggles, and everything else is ready to go, instead of having to write all of that boilerplate JS yourself.

I de­cided that the right so­lu­tion was to fo­cus on solv­ing those prob­lems, and be care­ful not to do any­thing that would turn Tailwind UI into a dif­fer­ent prod­uct.

So this is what’s dif­fer­ent when you look at a React or Vue ex­am­ple com­pared to the vanilla HTML ver­sion:

* Each framework example uses the right syntax: React examples use JSX, and Vue examples are provided in the single-file component syntax.

* Transitions are real now: instead of comments telling you what classes to add at each phase of a transition, the transition is just there, using either a Headless UI transition component or Vue's native transition component.

* Interactive elements are handled by Headless UI: you'll see a few imports in any example that requires JS, where we pull in the required Headless UI components, and then those are used directly in the markup.

* Any repeated chunks of markup have been converted into basic loops: any data-driven stuff (like lists of people, or navigation items) is extracted into simple variables right there in the example to reduce duplication but still keep everything together in one place. In your own projects, you'd swap this out with data from an API or database or whatever, but we keep the examples simple and don't make any assumptions for you.

* Icons are pulled in from the Heroicons library: instead of inlining the SVG directly whenever an icon is used, we pull them in from our React/Vue icon libraries to keep the markup simpler.

Here’s an ex­am­ple of what it ac­tu­ally looks like:

import { Menu, Transition } from '@headlessui/react'
import { DotsVerticalIcon } from '@heroicons/react/solid'
import { Fragment } from 'react'

const people = [
  { name: 'Calvin Hawkins', email: 'calvin.hawkins@example.com' },
  { name: 'Kristen Ramos', email: 'kristen.ramos@example.com' },
  { name: 'Ted Fox', email: 'ted.fox@example.com' },
]

export default function Example() {
  return (
    <ul className="divide-y divide-gray-200">
      {people.map((person) => (
        <li key={person.email} className="py-4 flex">
          {/* ...name, email, and a Menu/Transition dropdown per row,
              built from the components imported above... */}
          <p>{person.name}</p>
          <p>{person.email}</p>
        </li>
      ))}
    </ul>
  )
}

It's still a single example where you can see everything that's going on at once, and you can cut it up however makes the most sense for your project. You get to define your own prop APIs to meet your own needs, name things however makes the most sense for your domain, and fetch your data in whatever way works best with the other technologies you work with.

So that’s how it all works from a cus­tomer’s per­spec­tive, but if you’re cu­ri­ous how we ac­tu­ally built this stuff in­ter­nally, it’s pretty in­ter­est­ing and worth talk­ing about.

Tailwind UI is like 450 ex­am­ples or some­thing now, and con­vert­ing all of that stuff to React/Vue by hand would have been ab­solute tor­ture, and im­pos­si­ble to main­tain in the long-term. So we needed some way to au­to­mate it.

If you're anything like me, the entire idea of automatically generating this stuff in different formats might make you cringe. For me at least, my gut reaction is just "well, there goes the human touch; it's just going to feel like machine-generated garbage now," and of course that is not acceptable to me at all. I want to be proud of the stuff we release, not feel like we had to make really ugly compromises.

So how­ever we did this, the out­put had to live up to our stan­dards. This meant we were gonna have to build a sys­tem to do this our­selves, from scratch.

For the first two months of the year, Brad spent all of his time building a custom authoring chain specifically for Tailwind UI components that could take our HTML and turn it into React code that looked like it was hand-written by a person.

Here's how it works: instead of authoring our examples in vanilla HTML, we author them in a sort of custom flavor of HTML full of custom elements that we ultimately transform into vanilla HTML using PostHTML.

Here's what one of our dropdown examples looks like in our internal authoring format.

You can prob­a­bly al­ready see why au­thor­ing things this way makes it so much eas­ier to con­vert to some­thing like React or Vue than just writ­ing the HTML by hand.

We crawl this doc­u­ment as an AST, and ac­tu­ally trans­form it into four for­mats:

* The vanilla HTML you get when you copy the snippet.

* The HTML that gets injected into the preview pane, where we use some very quick and dirty Alpine.js to demo the different interactions in the example.

* The React snippet for you to copy.

* The Vue snippet for you to copy.

The key to get­ting sen­si­ble out­put is re­ally just hav­ing to­tal con­trol of the in­put for­mat. It’s still hard work, but when you can en­code the in­tent of each ex­am­ple into a cus­tom in­put for­mat, con­vert­ing that to an­other for­mat turns out so much bet­ter than try­ing to write some­thing that can con­vert ar­bi­trary jQuery to React or some­thing.
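As a tiny illustration of that difference, even the purely mechanical part of emitting React (renaming HTML attributes like for and class to their JSX spellings) becomes a simple table lookup once you control the input. The table and regex below are my own sketch, not the actual Tailwind UI pipeline:

```javascript
// Minimal HTML -> JSX attribute renaming, the kind of mechanical rewrite
// that is easy when you control the input format. The attribute table and
// regex here are illustrative only.
const JSX_ATTRS = { class: 'className', for: 'htmlFor', tabindex: 'tabIndex' };

function htmlAttrsToJsx(html) {
  return html.replace(/\b(class|for|tabindex)=/g, (match, name) => `${JSX_ATTRS[name]}=`);
}

console.log(htmlAttrsToJsx('<label for="email" class="block">Email</label>'));
// '<label htmlFor="email" className="block">Email</label>'
```

Converting arbitrary markup (or arbitrary jQuery) would need a real parser and far more rules; a constrained authoring format keeps each rewrite this small and predictable.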


Read the original on blog.tailwindcss.com »

10 231 shares, 11 trendiness, words and minutes reading time

The decline of Heroku

Heroku has long been held up as the gold-stan­dard plat­form as a ser­vice (PaaS) for soft­ware de­vel­op­ers to eas­ily de­ploy their code with­out hav­ing to worry about the un­der­ly­ing in­fra­struc­ture, while oth­ers see it as akin to a mag­i­cal fallen civ­i­liza­tion with a lim­ited fu­ture.

"The history of IT is littered with platforms people thought were fantastic that don't exist anymore," said James Governor, a founder of the developer-focused analyst firm RedMonk. "It had a good run and a huge influence, but nothing lasts forever."

Heroku’s ar­chi­tec­tural lim­i­ta­tions and the high cost of run­ning a busi­ness on the plat­form have his­tor­i­cally hin­dered its abil­ity to truly scale be­yond a core set of web 2.0 cus­tomers, but there is still hope that Heroku is set­ting it­self up for a glo­ri­ous sec­ond act.

Founded in 2007 by three Ruby developers (James Lindenbaum, Adam Wiggins, and Orion Henry), Heroku was bought just three years later, when the SaaS giant Salesforce eventually beat out VMware to pick the company up for $212 million, at a time when it still had only 30 people on staff and supported only the Ruby programming language.

"I believe Heroku was one of the most revolutionary products of its generation and pushed web development further forward than it gets credit for," said Jason Warner, head of engineering at Heroku between 2014 and 2017. "It is also one of the most confounding, because it was so ahead of its time. It looked like magic at the time, and people were blown away by it, but it started to calcify under Salesforce. It should never have been a PaaS; it should have been a multilayered cake of PaaS with various escape hatches to build out with Kubernetes or go multicloud, but that wasn't what was to be."

Today, Heroku is part of the broader Salesforce Platform of developer tools, but it remains a successful business in its own right, accounting for hundreds of millions of dollars in annual revenues and supporting a wide range of languages and thousands of developers who run applications on it. "Salesforce has made it more stable, scalable, and support new languages. The core idea of taking an app and pushing to the cloud without having to think about servers, with a beautiful developer experience, is the same today, and I know that because I am a customer," cofounder Adam Wiggins said.

In practice, using Heroku typically involves a common runtime of deploying to a unique domain, which routes HTTP requests to a virtualized Linux container (or dyno, as Heroku calls them) spread across a "dyno grid" of AWS servers. Heroku's Git server handles application repository pushes from permitted users. There is also the option of dedicated, single-tenant Private Spaces for premium enterprise customers.

“Heroku was one of the first real cloud-native development environments, and they essentially invented the widespread model of container-based computing,” said Yefim Natis, a distinguished vice president at Gartner.

“The thing that blew people’s mind was the Git push to deploy, which is the core idea people take away from Heroku, to take away all of this other stuff people thought they had to do,” said Heroku cofounder Lindenbaum, now a partner at the startup accelerator Heavybit. “Our vision wasn’t to put lipstick on a pig, but to rethink how this problem isn’t a problem anymore.”

Heroku’s popularity has always hinged on its simplicity, elegance, and usability, pioneering the focus on the developer experience and aiming to make deployment as seamless as the development process. “[Heroku] was magical and everyone that saw it freaked out,” said Adam Jacob, Chef cofounder and now CEO of the System Initiative.

Ten years on, none of the original cofounders are still at Heroku. Meanwhile, under Salesforce, the company has steadily grown its revenues but left the core product largely alone, while broad industry shifts occurred around it.

“Heroku is like a fallen civilization of elves. Beautiful, immortal, beloved by all who encountered it, but still a dead end,” Jacob tweeted.

“When I joined Heroku, the vision had been fulfilled, but it is also static and has been for some time, which is the frustrating thing for some people,” Warner said.

Although Heroku helped pioneer simplified, cloud-native software development techniques, it took too long to adapt to the emerging industry standards of Docker containers orchestrated by Kubernetes, said Gartner’s Natis. “As far as its architecture and its pioneering character, I think that stopped with the acquisition [by Salesforce]. I think they got frozen in time.”

Tod Nielsen, who was Heroku’s CEO from 2013 to 2016, said from a business perspective, “Salesforce did a great job of expanding Heroku within corporates.” But technologically, “what they gave up was all the ‘cool kid’ innovation.”

Built on AWS EC2 instances, Heroku’s underlying dyno grid system naturally trades off complexity and customizability for simplicity and speed. These trade-offs make the platform elegant and easy to use, but also somewhat inflexible.

For a certain set of companies, namely those building 12-factor web applications, Heroku has been and always will be a piece of technical wizardry. “It was very powerful as a developer workflow that was highly productive for a certain class of application, which a lot of startups were building at the time,” said RedMonk’s Governor.
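Much of that productivity came from the 12-factor discipline itself: configuration lives in environment variables, not in code, so the same build runs unchanged on a laptop, in CI, or on a dyno. A small sketch in Python, assuming the conventional `DATABASE_URL` variable that Heroku’s Postgres add-on sets (the parsing helper and default URL are illustrative):

```python
import os
from urllib.parse import urlsplit

def database_config(env=os.environ):
    # Twelve-factor rule III: read config from the environment, so
    # swapping databases never requires a code change or redeploy.
    # DATABASE_URL is the name Heroku's Postgres add-on conventionally sets;
    # the local fallback URL here is purely illustrative.
    url = urlsplit(env.get("DATABASE_URL", "postgres://localhost:5432/dev"))
    return {
        "host": url.hostname,
        "port": url.port or 5432,
        "user": url.username,
        "password": url.password,
        "dbname": url.path.lstrip("/"),
    }
```

Apps written this way fit Heroku’s model perfectly, which is exactly why, as Mizerany notes below, communities with “deeply embedded ways of working” that predate these conventions were a harder sell.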

However, as Heroku expanded into other languages, issues cropped up. “I think we were possibly too early in wanting everything to be simple, which becomes difficult when you turn around and try to go to the Java community, with its immense amount of tooling and deeply embedded ways of working,” said Blake Mizerany, who was the first full-time engineering hire at Heroku in 2008. “That would bite us a little bit when we spoke to companies that wanted to build on Heroku, because they always needed something way off the happy path with Heroku.”

For organizations that wanted a little more flexibility to run applications where they needed, the rival PaaS Cloud Foundry from VMware offered a more palatable route, by allowing for on-premises deployments and the sort of complex customizations required to hook into an enterprise environment. VMware also invested in a consulting arm, Pivotal Labs, tasked with evangelizing the platform approach for more traditional organizations like Orange or Bank of America in the early 2010s.

Heroku, by com­par­i­son, has been slow to al­low for en­ter­prise cus­tomers to op­er­ate in hy­brid and mul­ti­cloud modes, some­thing Salesforce has looked to ad­dress with the ad­di­tion of Private Spaces in 2016, which al­lows cus­tomers to run in a ded­i­cated en­vi­ron­ment, con­nect to on-premises sys­tems, and se­lect from one of six ge­o­graphic re­gions. Similarly, Salesforce’s re­cently launched Hyperforce should even­tu­ally al­low all Salesforce cus­tomers more choice over where their ser­vices run in the pub­lic cloud.

Where Heroku and other PaaS options thrive is in their ability to lasso complexity for developer teams to better focus on delivering new features for customers. The problem is, most organizations have built-in tech debt and ways of working that must be accounted for, making something as opinionated as Heroku too constraining.

“There end up being too many pieces for people to assemble and maintain themselves, in which case we see people wanting something like Heroku and that ability to just focus on writing the application,” said Stephen O’Grady, the other cofounder of RedMonk. “We hear this a lot, where customers are spending like 40% of their time fighting Jenkins, for example. The trick is to do this with enough flexibility to meet a wide range of use cases, and that is where things like Heroku have proved to be too constrained or opinionated.”

Camille Fournier, head of platform engineering at the hedge fund and financial services firm Two Sigma, describes Heroku as the “gold standard” for the deploy side of the software development process. However, in her experience, “developers will start to meet the limits of what a platform like Heroku can provide and start to veer off of that path.”

Fournier believes that any quickly growing engineering organization will confront these limits eventually. “It tends to become obvious when you need to build your own platform. If you are using Heroku you will hit scaling limits and see teams peel off and do their own thing,” she said.

Many organizations that do decide to break away from Heroku, like the streaming platform Hulu, are looking to build their own internal platform, working countless hours to chase the vision of a platform that resembles the Heroku experience but meets the specific requirements of their business.

“The modern tech industry is basically folks just endlessly remaking remakes of Heroku,” RedMonk analyst Governor has tweeted. “When something is that beautiful, it is not surprising that it spawned its own subgenre,” Jacob said.

It is often said that while not many people bought Velvet Underground records, those who did went out and started a band. For software developers of a certain era, Heroku carries a similar legacy. Every developer who came into contact with Heroku continues to chase some version of that legendary developer experience today. “It absolutely is the Velvet Underground of developer platforms,” Jacob said.

But there’s a cost, Jacob noted: “Everyone who touched it has an opinion. The problem is those opinions aren’t just opinions, they are hard constraints when you run a business on software. It’s not fungible and, contrary to popular belief, those constraints are in fact unique.”

That being said, for many early Heroku engineers like Mizerany, imitation really is the highest form of flattery. “For me, the fact that we built something that everyone finds themselves having to build today, is the biggest possible compliment,” he said.

Pricing often comes up as a key blocker for organizations who quickly feel like they are outgrowing Heroku, even if they really love the developer experience.

“Pricing has always been a bugaboo and we never solved it,” Warner said. “At Salesforce you had to make up margin on pricing. I think you can scale Heroku (it runs some of the top 20 websites in the world), but you have to think about it differently.”

Heroku is generally priced per dyno, with a range of premium add-ons and high-performance options for enterprise customers, so the cost climbs quickly as your business grows. The highest-performing 14GB dyno costs $500 per dyno per month, and that’s just the start.
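Because the model is linear in dyno count, a bill forecast is simple arithmetic, which is also why costs become predictable and then painful at scale. A rough sketch (only the $500 figure for the 14GB performance tier comes from the article; the other tier names and prices here are made up for illustration):

```python
# Illustrative monthly prices per dyno. Only the 500 figure (the 14GB
# performance tier) comes from the article; the rest are hypothetical.
DYNO_PRICES = {"hobby": 7, "standard": 25, "performance": 500}

def monthly_cost(fleet):
    # fleet maps tier name -> dyno count. Cost scales linearly with
    # count, before premium add-ons push the bill higher still.
    return sum(DYNO_PRICES[tier] * count for tier, count in fleet.items())
```

A handful of performance dynos alone reaches thousands of dollars a month, which frames the Rainforest and PensionBee experiences described next.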

“Some are willing to pay for that incredible experience, but for many that became challenging,” RedMonk’s Governor said.

Take the software testing company Rainforest, which moved from Heroku to Google’s managed Kubernetes service (GKE) in 2019 after it started to reach the limits of its database plan and costs started to spiral. “Until late last year, Rainforest ran most of our production applications on Heroku … it allowed us to scale and remain agile without hiring a large ops team, and the overall developer experience is unparalleled. But in 2018 it became clear that we were beginning to outgrow Heroku,” Rainforest’s former senior architect, Emanuel Evans, wrote in a company blog post.

Furthermore, Evans wrote, Heroku is expensive, even with the savings the company made from being able to run everything through a small operations team. But Heroku tipped from expensive into too expensive, at least for certain compute-intensive workloads, when Rainforest added some important security-related features, such as virtual private cloud.

Then there is the fintech PensionBee, which built its monolithic Node.js application from the ground up on Heroku in 2015, underpinned by Salesforce, with all data synced by a premium add-on product called Heroku Connect.

PensionBee CTO Jonathan Lister Parsons sees the price concerns around Heroku as overblown when total cost of ownership is accounted for. “I think about all the shit you don’t need to do with Heroku and it is a list with 20 operational things on it,” he said. “Yes, it is expensive compared to AWS, but you are getting a team of a thousand people who are there to run a service that runs your code very well.”

That being said, “Heroku Connect is still unacceptably expensive and, as we grow and scale, it goes past the point where using that solution makes sense, and they know that,” Lister Parsons added.

A Salesforce spokesperson acknowledged Heroku’s cost but said, “Cloud operations are expensive, and we need to be sure we’re adding all the costs up. If someone is comparing IaaS costs to Heroku’s PaaS offering, they may be overlooking the staffing of devops, pipelines, integrations, and IaaS substrate impacts to operational load.”


Read the original on www.infoworld.com »
