Sincerity Wins the War (wheresyoured.at)
55 points by treadump 7 hours ago | 22 comments
bananalychee 5 hours ago [-]
What the author presents as "sincerity" comes off as injecting (his) biased views into reporting. The post devolves into a tedious series of anecdotes that ostensibly prove that "context" can reframe a story, and he argues that sincere reporting should take that context into account, which is reasonable in principle, but he doesn't seem to realize that he's only presenting context that suits his worldview and tossing out the rest. For example, he decries journalists for being wrong, or for underemphasizing his view, because they failed to account for data that proved him right in retrospect. In the same paragraph, he smears reporters for both under-weighing and over-weighing soft data. That's easy to do in hindsight. My takeaway is that he undermines his own premise by demonstrating everything that can go wrong in opinionated reporting: cherry-picking, double standards, and confirmation bias.

P.S.: the most surprising thing to me about this blog post is that it went through an editor.

jszymborski 4 hours ago [-]
> comes off as injecting (his) biased views into reporting

The trouble is that not adding context is also a choice, one which also reveals an author's beliefs on the topic, except with sufficient plausible deniability. This is why the article describes it as cowardly. It isn't sufficient to defer to people in positions of power. You may appear neutral to those who don't bother to think about it, but in truth you're just adopting the position of the person whose anecdata you've unthinkingly regurgitated. The job of a journalist is to think, apply rigorous thought, do research, and challenge the status quo.

There is no "unbiased" media, just sincere and insincere. Good will arguments and bad will arguments.

We all perceive the world some way, and it isn't always how other people perceive it. What one calls boss-coddling, another might call common sense business. As long as you do your homework, "stand on your shit", and don't just remasticate the pablum handed to you from on high, we'll be fine. Sadly, as pointed out in the article, we're sorta drowning in soggy pablum these days.

bananalychee 2 hours ago [-]
Dispensing with editorialism is a choice, yes, but that only translates to bias if it's done inconsistently. Meanwhile, while contextualizing, and to a greater extent reframing, can also be done in a fair and objective manner, doing it well and consistently is much more difficult.

I don't think that Zitron cares about objectivity nearly as much as he cares about his worldview being validated by reporters, thus the idea that failing to inject context [which promotes that worldview] is inherently insincere. Since journalism is a fairly ideologically homogeneous profession, I can understand how that might appeal to him, but I doubt he'd make that argument from the other side of the fence.

JohnMakin 5 hours ago [-]
> My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock.

I think the underlying belief that causes people to see things like this as "silly", or AI criticism as overstated, is that the market does not really make mistakes, at least not in the aggregate. So, if XYZ company's CEO says "Our product is doing ABC 300000% better and will take over the world!" and its value/revenue is also going up at the same time, that is seen as a sign that the market has validated this view, and it is infallible (to a point). Of course, this ignores that the market has historically and often been completely wrong, and that this type of reasoning is entirely circular - pay no attention to the man (marketing team) behind the curtain or think about it too hard.

radialstub 5 hours ago [-]
> market has validated this view, and it is infallible (to a point)

Irrational Exuberance. Speculative bubbles are scarily common.

MisterKent 6 hours ago [-]
My tech friends and I cannot wait for this agentic bubble to pop. Much like the dotcom bubble, there's absolutely value in AI but the hype is absurd and is actively hurting investments into reasonable things (like just good UX).

The hype and zealotry remind me of a cult. And as I go higher up the chain at my big tech company, the more culty they are in their beliefs. And the less they believe AI can do their specific jobs, and the less they have actually tried to use AI beyond badly summarizing documents they barely read before.

AI, as far as I can tell, has been a net negative for humans. It's made labor cheaper, answers less reliable, reduced the value we placed on creativity and professionals in general, allows mass disinformation, and mostly results in people being lazier and not learning the basics of anything. There are of course spots of brightness, but the hype bubble needs to burst so we can move on.

JohnMakin 5 hours ago [-]
The belief that's settled in for me after a few years of observation is that I absolutely buy the "hype" claim that AI is a force multiplier. However, lots of things out there are terrible and shouldn't be force multiplied (spam, phishing, scams, etc.), or, say, people who are very bad at their jobs. If the output of people like that is multiplied, it clearly can and will be very bad. I have seen this play out at a small scale already on some teams I've worked with.

For the maybe ~1-5% of people out there that have something valuable to contribute (that's my number, and I fully believe it) then I think it can be good, but those types also seem to be the most wary of it.

fullshark 5 hours ago [-]
What depresses me is all these people that are leading us with these stupid decisions re: AI will get bonuses and promotions after the bubble pops. All the useless effort getting AI everywhere will be forgotten, no one will care or remember the idiotic decisions and we will all be chasing the new new thing.

Sincerity will not win in the end. VC money and the quest for insurmountable tech driven cash flows is what drives everything. The age of software being driven by sincere engineers trying to build is dead outside niche projects.

tptacek 6 hours ago [-]
In what way is this piece saying something different than Zitron said on June 9 in his "Never Forget What They've Done" piece?

https://www.wheresyoured.at/never-forget-what-theyve-done/

yifanl 6 hours ago [-]
You mean, besides how this one is targeted at journalists and that one is targeted at the tech industry?

The difference, besides everything else is expectations: He expects the tech industry to overhype things because they're salespeople, he expects journalists to call them out when they're overhyping things.

tptacek 6 hours ago [-]
Can you say more? I'm asking seriously; I'd like to have a better understanding of "how to read" Zitron, because these pieces are long and emotive. Are they basically just responses to whatever the latest news is, the way David Gerard writes about blockchain stuff? Because I did see the utility in that kind of writing.
rwmj 2 hours ago [-]
The article is a long complaint about "churnalism". It's not exactly a new complaint. Nick Davies wrote a whole book about it in 2008 [1]. But it's getting worse and worse so it's worth reminding people of it.

[1] https://en.wikipedia.org/wiki/Flat_Earth_News_(book)

tptacek 59 minutes ago [-]
Thanks, that makes much more sense than my original read of the article.
altairprime 5 hours ago [-]
The message, if stripped of emotion and audience, may not be wholly unique. The target audience of the two pieces is certainly different, and the emotional tone as well. Zitron doesn’t seem to direct his writing towards any single ‘audience’ week-to-week, as some others do; but each post does always appear to have a target in particular. So the way I would read Zitron, then, is to ask: ‘what is he trying to persuade who of, and how does his emotional intensity uniquely promote that outcome?’ (relative to less-intense others).
yifanl 5 hours ago [-]
Are you asking because of the perceived poor reception to your recent blogpost?

Because I'd say a lot of your questions are answered directly in this latest piece.

tptacek 5 hours ago [-]
I've never written anything that got a better reception than that blog post, much to my chagrin, so I don't know where you're coming from with that. But put it aside: what parts of this Zitron piece are responsive to what I wrote? That would be super helpful to understand.
potatolicious 5 hours ago [-]
I think there's some overlap in the thrust of both pieces but they're pretty distinct topic-wise?

The first piece strikes me as a jeremiad against "enshittification" - the idea that the industry has done good things, but now subsists on a combination of hot air and making existing good things worse. It further makes the specific point that LLMs belong in the "hot air" category and share little with other innovations of the past.

It does touch on what he perceives as overly-friendly press coverage of the above, but I didn't read the piece as focusing on that point.

FWIW, I find Zitron to be an... unreliable commentator on this subject, to put it mildly, but I am not entirely unsympathetic to the point.

The second piece is more specifically focused on the overly-friendly press coverage, and the idea that journalists are either overly credulous, especially to fantastical claims about the tech, or openly corrupted by the parties they are meant to cover.

tptacek 5 hours ago [-]
I feel like I pick up all that stuff but I don't really understand who the audience is for it. It would make more sense to me if these companies were public and taking investment (I mean, Google and Meta are, but they're not "AI plays", and this most recent piece focuses on Anthropic and OpenAI). Then the point of the piece would be, like David Gerard's blockchain pieces, "don't invest in this".
potatolicious 4 hours ago [-]
That's kinda my main beef with Ed's writing - it's pretty unfocused. Both pieces you linked demonstrate this - he careens from point to point.

The lack of focus, I think, comes from the fact that his audience is an amalgamation of multiple groups, much of it the "tech is over, it's all grifters now" cadre, so a lot of this isn't really meant as a persuasive argument for anything but rather just a dumping ground of grievances.

I alluded to finding him an unreliable narrator of this topic, and this is why. So much of this audience is so fully committed to "this is spicy autocomplete, a totally non-functional grift on par with NFTs" as a position that it compromises any of the other points he's trying to make.

FWIW I do find some things of his sympathetic - particularly around how much structural risk we're taking on every time the VCs decide to line up the hype cannon behind something. That said, I think it's also fair to say that this is my projection of his argument, because his actual arguments are often too muddied to even draw that level of specificity.

[edit] I continue to follow Ed's writing because a) I think there are glimmers or something in there and b) I treat it as a temperature read of a substantial minority of public opinion. The level of disillusionment and rising anger against our industry is concerning.

camgunz 56 minutes ago [-]
I'm not a subscriber, and I find Zitron's style too rage-baity to really absorb it all, but I share his tech-skeptic perspective. I don't think he has an exhortation as direct as Gerard's "don't invest in this", but his general "there's something fundamentally grifty about tech" resonates, and I think TFA's basic premise - "the press is recklessly credulous toward tech industry claims" - is more or less on point.

(I looked up the interview [0] to be sure and I wanna say I remembered this perfectly I am the greatest) My anecdata here is I quit listening to Hard Fork (Kevin Roose & Casey Newton's NYT podcast about tech) after Roose was interviewing... the Cruise CEO. I'll put the exchange here:

Kevin Roose: Do you feel a similar sense of responsibility for the people who are currently driving for a living to find them new work or to create new work for them?

Kyle Vogt: I think we have to contribute to it, for sure. You know, I think — and that could take the form of to the extent we’re able providing training programs or alternate jobs for people. But more than that, I think it’s interacting with our government and our regulators and letting them know this is coming, when and how, and giving them some notice so we can plan ahead a little bit.

---

I quit listening because I wanted _any follow up_ to that at all (there are zero followups in the entire interview--they should rename the podcast to "No Followups" or "Spew Literally Anything To Our Listeners"), like, "who in Congress are you talking to", "what are your plans for training programs", "how have you staffed these efforts", but like, they start joking about rolling gyms or some bullshit (the irony of putting a fuckin stationary bike in a self-driving car honestly is too much, it just is too much I can't take it). I wouldn't write a screed like Zitron did. I would simply say the press doesn't treat tech seriously. Can you imagine a political interview where you have color commentary? It would literally be a joke.

I don't know if this is a satisfying answer to your "who is the audience for this" question, but TL;DR: I think it's basically... I wouldn't call them tech skeptics, but rather people who think tech is actually hugely important and deserves to be treated with greater seriousness.

[0]: https://www.nytimes.com/2023/05/12/podcasts/googles-ai-bonan...

tptacek 42 minutes ago [-]
Ok so this all makes sense but it is basically the exact message I got off the other Zitron post. Tech skepticism I get! But these posts are like 8,000 words; are people skepticizing tech... recreationally?

The post upthread that suggested this was basically just a criticism of how journalism operates was clearer to me (still a point I think he could have made in half as many words, but at least a distinct point).

KerrAvon 6 hours ago [-]
We need more reality-grounded takes like this one. I do have a quibble:

> These LLMs also have “agents” - but for the sake of argument, I’d like to call them “bots.” Bots, because the term “agent” is bullshit and used to make things sound like they can do more than they can[…]

I'd argue "agents" is actually reasonable technical jargon for this purpose, with a history. Tog on Interface (circa 1990) uses the term for a smart software feature in an app from that time period.