*AI Summary*
*Abstract:*
This transcript documents a high-density technical discussion on Hacker News regarding the release of Anthropic’s Claude Opus 4.6 and its CLI companion, Claude Code. The discourse centers on the model’s expanded 1 million (1M) token context window, the introduction of "Agent Teams" for autonomous multi-agent collaboration, and the technical implementation of the Claude Code tool.
Key themes include the validity of "needle-in-a-haystack" benchmarks—specifically a Harry Potter spell-retrieval test—where critics argue that LLM performance often reflects training data memorization rather than active context processing. Software architects analyze the technical debt of the Claude Code CLI, critiquing its high memory footprint (up to 700MB+) resulting from a React/Node.js-based terminal architecture. Further debate addresses the economic sustainability of frontier model inference, operational instability at Anthropic, and the evolving efficacy of agentic workflows versus traditional software engineering.
*Claude Opus 4.6 and Claude Code: Technical Analysis and Community Reception*
* *[16 hours ago] 1M Context & Agent Teams:* Anthropic introduces Opus 4.6 featuring a 1 million token context window and "Agent Teams," an experimental feature allowing multi-agent collaboration via the `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` flag (a usage sketch follows this list).
* *[12 hours ago] Needle-in-a-Haystack Testing:* A user reports a high success rate (49/50) in retrieving specific data points (Harry Potter spells) from a large context. Skeptics argue this is a "memorization" test rather than a genuine context-retrieval test, as the source text is likely present in the model's training weights.
* *[15 hours ago] Benchmarking & "Benchmaxxing":* Discussions highlight a 10-point jump in "Terminal Bench" scores. However, developers note a stagnation in "SWE-bench Verified" and suggest that labs may be "benchmaxxing"—optimizing specifically for benchmarks rather than generalizable utility.
* *[15 hours ago] Claude Code Implementation Issues:* Deep-dive analysis reveals Claude Code is a React/Node.js app rendering to a TUI (Terminal User Interface). Architects report massive virtual memory reservation (32.8 GB) against an actual resident footprint of ~746 MB, significantly higher than Rust-based competitors like Codex (15 MB); a measurement sketch follows this list.
* *[13 hours ago] Operational Reliability:* Commenters reference Anthropic’s "status page history," noting frequent downtime and the "Fail Whale" nature of the service during high-load periods or new model rollouts.
* *[15 hours ago] Automated Memory Systems:* Version 2.1.32 of Claude Code includes a "memory" feature where the agent automatically records and recalls project-specific lessons in a `MEMORY.md` file to persist context across sessions.
* *[13 hours ago] Economic Sustainability:* Commenters debate whether the $20–$200/month subscription plans are being heavily subsidized by VC capital, as inference costs for 1M-token windows are projected to be significantly higher than current retail pricing (a back-of-the-envelope sketch follows this list).
* *[11 hours ago] Training Data Contamination:* Research is cited showing that models such as Gemini 2.5 can recite the first Harry Potter book nearly verbatim for over 75% of its length, reinforcing the difficulty of testing context retrieval with well-known copyrighted or public-domain material that is already embedded in training data.
* *[15 hours ago] SVG Generation Performance:* The "Pelican Benchmark" (generating complex SVGs of animals on bicycles) shows marginal improvements in geometric accuracy, though models still struggle with anatomical specifics like legs and joints.
* *[1 hour ago] Real-World Coding Efficacy:* Senior engineers report mixed results; while Opus 4.6 excels at "one-shot" fixes for UI bugs, some users find it increasingly "lobotomized" or prone to ignoring complex constraints in favor of generic patterns compared to earlier snapshots.
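The Agent Teams bullet above names an experimental flag; the following is a minimal launch sketch, assuming the flag is read by the `claude` CLI as an environment variable (the exact semantics are not documented in the thread):

```python
import os
import subprocess

# Hypothetical launcher: start one Claude Code session with the experimental
# Agent Teams flag set. Assumes the `claude` binary is on PATH and that it
# reads this environment variable; both are assumptions, not documented here.
env = dict(os.environ, CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS="1")
subprocess.run(["claude"], env=env, check=False)
```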
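For the Claude Code implementation bullet, the gap between the 32.8 GB virtual reservation and the ~746 MB resident footprint can be checked with a small script; this sketch uses the third-party `psutil` package, and the process names are assumptions:

```python
import psutil

# Compare virtual address-space reservation (vms) with resident set size (rss).
# Node.js-based tools often reserve very large virtual ranges while their
# actual working set is far smaller, which is the distinction quoted above.
for p in psutil.process_iter(["name"]):
    try:
        if p.info["name"] in ("claude", "node", "codex"):  # assumed process names
            mem = p.memory_info()
            print(f"{p.info['name']:<8} rss={mem.rss / 2**20:8.1f} MiB  vms={mem.vms / 2**30:6.1f} GiB")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
```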
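For the economic-sustainability bullet, a back-of-the-envelope sketch of why full 1M-token prompts strain flat-rate plans; the per-token prices and usage pattern below are illustrative assumptions, not Anthropic's published rates:

```python
# Illustrative arithmetic only: the prices are assumptions, not published rates.
PRICE_PER_M_INPUT = 15.00    # assumed $ per 1M input tokens
PRICE_PER_M_OUTPUT = 75.00   # assumed $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# One fully loaded 1M-token prompt with a modest 2K-token reply:
per_request = request_cost(1_000_000, 2_000)
print(f"~${per_request:.2f} per request")                        # ~$15.15 under these assumptions
print(f"~${30 * per_request:.0f} for 30 such requests a month")  # vs. a $200/month plan
```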
*Review Recommendation:*
This topic should be reviewed by *AI Research Scientists, Senior DevOps Engineers, and Systems Architects.* The primary focus for these reviewers would be the trade-offs between high-level agentic abstraction and the underlying resource efficiency/stability of the tools provided.
AI-generated summary created with gemini-3-flash-preview for free via RocketRecap-dot-com. (Input: 83,955 tokens, Output: 895 tokens, Est. cost: $0.04).
Below, I will provide input for an example video (comprising title, description, and transcript, in this order) and the corresponding abstract and summary I expect. Afterward, I will provide a new transcript that I want summarized in the same format.
**Please give an abstract of the transcript and then summarize the transcript in a self-contained bullet list format.** Include starting timestamps, important details and key takeaways.
Example Input:
Fluidigm Polaris Part 2- illuminator and camera
mikeselectricstuff
131K subscribers
5,857 views Aug 26, 2024
Fluidigm Polaris part 1 : • Fluidigm Polaris (Part 1) - Biotech g...
Ebay listings: https://www.ebay.co.uk/usr/mikeselect...
Merch https://mikeselectricstuff.creator-sp...
40 Comments
@robertwatsonbath
6 hours ago
Thanks Mike. Ooof! - with the level of bodgery going on around 15:48 I think shame would have made me do a board re spin, out of my own pocket if I had to.
1
Reply
@Muonium1
9 hours ago
The green LED looks different from the others and uses phosphor conversion because of the "green gap" problem where green InGaN emitters suffer efficiency droop at high currents. Phosphide based emitters don't start becoming efficient until around 600nm so also can't be used for high power green emitters. See the paper and plot by Matthias Auf der Maur in his 2015 paper on alloy fluctuations in InGaN as the cause of reduced external quantum efficiency at longer (green) wavelengths.
4
Reply
1 reply
@tafsirnahian669
10 hours ago (edited)
Can this be used as an astrophotography camera?
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
6 hours ago
Yes, but may need a shutter to avoid light during readout
Reply
@2010craggy
11 hours ago
Narrowband filters we use in Astronomy (Astrophotography) are sided- they work best passing light in one direction so I guess the arrows on the filter frames indicate which way round to install them in the filter wheel.
1
Reply
@vitukz
12 hours ago
A mate with Channel @extractions&ire could use it
2
Reply
@RobertGallop
19 hours ago
That LED module says it can go up to 28 amps!!! 21 amps for 100%. You should see what it does at 20 amps!
Reply
@Prophes0r
19 hours ago
I had an "Oh SHIT!" moment when I realized that the weird trapezoidal shape of that light guide was for keystone correction of the light source.
Very clever.
6
Reply
@OneBiOzZ
20 hours ago
given the cost of the CCD you think they could have run another PCB for it
9
Reply
@tekvax01
21 hours ago
$20 thousand dollars per minute of run time!
1
Reply
@tekvax01
22 hours ago
"We spared no expense!" John Hammond Jurassic Park.
*(that's why this thing costs the same as a 50-seat Greyhound Bus coach!)
Reply
@florianf4257
22 hours ago
The smearing on the image could be due to the fact that you don't use a shutter, so you see brighter stripes under bright areas of the image as you still illuminate these pixels while the sensor data is shifted out towards the top. I experienced this effect back at university with a LN-Cooled CCD for Spectroscopy. The stripes disappeared as soon as you used the shutter instead of disabling it in the open position (but focusing at 100ms integration time and continuous readout with a focal plane shutter isn't much fun).
12
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
12 hours ago
I didn't think of that, but makes sense
2
Reply
@douro20
22 hours ago (edited)
The red LED reminds me of one from Roithner Lasertechnik. I have a Symbol 2D scanner which uses two very bright LEDs from that company, one red and one red-orange. The red-orange is behind a lens which focuses it into an extremely narrow beam.
1
Reply
@RicoElectrico
23 hours ago
PFG is Pulse Flush Gate according to the datasheet.
Reply
@dcallan812
23 hours ago
Very interesting. 2x
Reply
@littleboot_
1 day ago
Cool interesting device
Reply
@dav1dbone
1 day ago
I've stripped large projectors, looks similar, wonder if some of those castings are a magnesium alloy?
Reply
@kevywevvy8833
1 day ago
ironic that some of those Phlatlight modules are used in some of the cheapest disco lights.
1
Reply
1 reply
@bill6255
1 day ago
Great vid - gets right into subject in title, its packed with information, wraps up quickly. Should get a YT award! imho
3
Reply
@JAKOB1977
1 day ago (edited)
The whole sensor module incl. a 5 grand 50mpix sensor for 49 £.. highest bid atm
Though also a limited CCD sensor, but for the right buyer its a steal at these relative low sums.
Architecture: Full Frame CCD (Square Pixels)
Total Number of Pixels: 8304 (H) × 6220 (V) = 51.6 Mp
Number of Effective Pixels: 8208 (H) × 6164 (V) = 50.5 Mp
Number of Active Pixels: 8176 (H) × 6132 (V) = 50.1 Mp
Pixel Size: 6.0 µm (H) × 6.0 µm (V)
Active Image Size: 49.1 mm (H) × 36.8 mm (V), 61.3 mm (Diagonal), 645 1.1x Optical Format
Aspect Ratio: 4:3
Horizontal Outputs: 4
Saturation Signal: 40.3 ke−
Output Sensitivity: 31 µV/e−
Quantum Efficiency (Peak R, G, B): KAF-50100-CAA 22%, 22%, 16%; KAF-50100-AAA 25%; KAF-50100-ABA (with Lens) 62%
Read Noise (f = 18 MHz): 12.5 e−
Dark Signal (T = 60°C): 42 pA/cm²
Dark Current Doubling Temperature: 5.7°C
Dynamic Range (f = 18 MHz): 70.2 dB
Estimated Linear Dynamic Range (f = 18 MHz): 69.3 dB
Charge Transfer Efficiency: Horizontal 0.999995, Vertical 0.999999
Blooming Protection (4 ms Exposure Time): 800X Saturation Exposure
Maximum Data Rate: 18 MHz
Package: Ceramic PGA
Cover Glass: MAR Coated, 2 Sides, or Clear Glass
Features: TRUESENSE Transparent Gate Electrode for High Sensitivity; Ultra-High Resolution; Broad Dynamic Range; Low Noise Architecture; Large Active Imaging Area
Applications: Digitization, Mapping/Aerial, Photography, Scientific
Thx for the tear down Mike, always a joy
Reply
@martinalooksatthings
1 day ago
15:49 that is some great bodging on of caps, they really didn't want to respin that PCB huh
8
Reply
@RhythmGamer
1 day ago
Was depressed today and then a new mike video dropped and now I’m genuinely happy to get my tear down fix
1
Reply
@dine9093
1 day ago (edited)
Did you transfrom into Mr Blobby for a moment there?
2
Reply
@NickNorton
1 day ago
Thanks Mike. Your videos are always interesting.
5
Reply
@KeritechElectronics
1 day ago
Heavy optics indeed... Spare no expense, cost no object. Splendid build quality. The CCD is a thing of beauty!
1
Reply
@YSoreil
1 day ago
The pricing on that sensor is about right, I looked in to these many years ago when they were still in production since it's the only large sensor you could actually buy. Really cool to see one in the wild.
2
Reply
@snik2pl
1 day ago
That leds look like from led projector
Reply
@vincei4252
1 day ago
TDI = Time Domain Integration ?
1
Reply
@wolpumba4099
1 day ago (edited)
Maybe the camera should not be illuminated during readout.
From the datasheet of the sensor (Onsemi): saturation 40300 electrons, read noise 12.5 electrons per pixel @ 18MHz (quite bad). quantum efficiency 62% (if it has micro lenses), frame rate 1 Hz. lateral overflow drain to prevent blooming protects against 800x (factor increases linearly with exposure time) saturation exposure (32e6 electrons per pixel at 4ms exposure time), microlens has +/- 20 degree acceptance angle
i guess it would be good for astrophotography
4
Reply
@txm100
1 day ago (edited)
Babe wake up a new mikeselectricstuff has dropped!
9
Reply
@vincei4252
1 day ago
That looks like a finger-lakes filter wheel, however, for astronomy they'd never use such a large stepper.
1
Reply
@MRooodddvvv
1 day ago
yaaaaay ! more overcomplicated optical stuff !
4
Reply
1 reply
@NoPegs
1 day ago
He lives!
11
Reply
1 reply
Transcript
0:00
so I've stripped all the bits of the
0:01
optical system so basically we've got
0:03
the uh the camera
0:05
itself which is mounted on this uh very
0:09
complex
0:10
adjustment thing which obviously to set
0:13
you the various tilt and uh alignment
0:15
stuff then there's two of these massive
0:18
lenses I've taken one of these apart I
0:20
think there's something like about eight
0:22
or nine Optical elements in here these
0:25
don't seem to do a great deal in terms
0:26
of electr magnification they're obiously
0:28
just about getting the image to where it
0:29
uh where it needs to be just so that
0:33
goes like that then this Optical block I
0:36
originally thought this was made of some
0:37
s crazy heavy material but it's just
0:39
really the sum of all these Optical bits
0:41
are just ridiculously heavy those lenses
0:43
are about 4 kilos each and then there's
0:45
this very heavy very solid um piece that
0:47
goes in the middle and this is so this
0:49
is the filter wheel assembly with a
0:51
hilariously oversized steper
0:53
motor driving this wheel with these very
0:57
large narrow band filters so we've got
1:00
various different shades of uh
1:03
filters there five Al together that
1:06
one's actually just showing up a silver
1:07
that's actually a a red but fairly low
1:10
transmission orangey red blue green
1:15
there's an excess cover on this side so
1:16
the filters can be accessed and changed
1:19
without taking anything else apart even
1:21
this is like ridiculous it's like solid
1:23
aluminium this is just basically a cover
1:25
the actual wavelengths of these are um
1:27
488 525 570 630 and 700 NM not sure what
1:32
the suffix on that perhaps that's the uh
1:34
the width of the spectral line say these
1:37
are very narrow band filters most of
1:39
them are you very little light through
1:41
so it's still very tight narrow band to
1:43
match the um fluoresence of the dies
1:45
they're using in the biochemical process
1:48
and obviously to reject the light that's
1:49
being fired at it from that Illuminator
1:51
box and then there's a there's a second
1:53
one of these lenses then the actual sort
1:55
of samples below that so uh very serious
1:58
amount of very uh chunky heavy Optics
2:01
okay let's take a look at this light
2:02
source made by company Lumen Dynamics
2:04
who are now part of
2:06
excelitas self-contained unit power
2:08
connector USB and this which one of the
2:11
Cable Bundle said was a TTL interface
2:14
USB wasn't used in uh the fluid
2:17
application output here and I think this
2:19
is an input for um light feedback I
2:21
don't if it's regulated or just a measur
2:23
measurement facility and the uh fiber
2:27
assembly
2:29
Square Inlet there and then there's two
2:32
outputs which have uh lens assemblies
2:35
and this small one which goes back into
2:37
that small Port just Loops out of here
2:40
straight back in So on this side we've
2:42
got the electronics which look pretty
2:44
straightforward we've got a bit of power
2:45
supply stuff over here and we've got
2:48
separate drivers for each wavelength now
2:50
interesting this is clearly been very
2:52
specifically made for this application
2:54
you I was half expecting like say some
2:56
generic drivers that could be used for a
2:58
number of different things but actually
3:00
literally specified the exact wavelength
3:02
on the PCB there is provision here for
3:04
385 NM which isn't populated but this is
3:07
clearly been designed very specifically
3:09
so these four drivers look the same but
3:10
then there's two higher power ones for
3:12
575 and
3:14
520 a slightly bigger heat sink on this
3:16
575 section there a p 24 which is
3:20
providing USB interface USB isolator the
3:23
USB interface just presents as a comport
3:26
I did have a quick look but I didn't
3:27
actually get anything sensible um I did
3:29
dump the Pi code out and there's a few
3:31
you a few sort of commands that you
3:32
could see in text but I didn't actually
3:34
manage to get it working properly I
3:36
found some software for related version
3:38
but it didn't seem to want to talk to it
3:39
but um I say that wasn't used for the
3:41
original application it might be quite
3:42
interesting to get try and get the Run
3:44
hours count out of it and the TTL
3:46
interface looks fairly straightforward
3:48
we've got positions for six opto
3:50
isolators but only five five are
3:52
installed so that corresponds with the
3:54
unused thing so I think this hopefully
3:56
should be as simple as just providing a
3:57
ttrl signal for each color to uh enable
4:00
it a big heat sink here which is there I
4:03
think there's like a big S of metal
4:04
plate through the middle of this that
4:05
all the leads are mounted on the other
4:07
side so this is heat sinking it with a
4:09
air flow from a uh just a fan in here
4:13
obviously don't have the air flow
4:14
anywhere near the Optics so conduction
4:17
cool through to this plate that's then
4:18
uh air cooled got some pots which are
4:21
presumably power
4:22
adjustments okay let's take a look at
4:24
the other side which is uh much more
4:27
interesting see we've got some uh very
4:31
uh neatly Twisted cable assemblies there
4:35
a bunch of leads so we've got one here
4:37
475 up here 430 NM 630 575 and 520
4:44
filters and dcro mirrors a quick way to
4:48
see what's white is if we just shine
4:49
some white light through
4:51
here not sure how it is is to see on the
4:54
camera but shining white light we do
4:55
actually get a bit of red a bit of blue
4:57
some yellow here so the obstacle path
5:00
575 it goes sort of here bounces off
5:03
this mirror and goes out the 520 goes
5:07
sort of down here across here and up
5:09
there 630 goes basically straight
5:13
through
5:15
430 goes across there down there along
5:17
there and the 475 goes down here and
5:20
left this is the light sensing thing
5:22
think here there's just a um I think
5:24
there a photo diode or other sensor
5:26
haven't actually taken that off and
5:28
everything's fixed down to this chunk of
5:31
aluminium which acts as the heat
5:32
spreader that then conducts the heat to
5:33
the back side for the heat
5:35
sink and the actual lead packages all
5:38
look fairly similar except for this one
5:41
on the 575 which looks quite a bit more
5:44
substantial big spay
5:46
Terminals and the interface for this
5:48
turned out to be extremely simple it's
5:50
literally a 5V TTL level to enable each
5:54
color doesn't seem to be any tensity
5:56
control but there are some additional
5:58
pins on that connector that weren't used
5:59
in the through time thing so maybe
6:01
there's some extra lines that control
6:02
that I couldn't find any data on this uh
6:05
unit and the um their current product
6:07
range is quite significantly different
6:09
so we've got the uh blue these
6:13
might may well be saturating the camera
6:16
so they might look a bit weird so that's
6:17
the 430
6:18
blue the 575
6:24
yellow uh
6:26
475 light blue
6:29
the uh 520
6:31
green and the uh 630 red now one
6:36
interesting thing I noticed for the
6:39
575 it's actually it's actually using a
6:42
white lead and then filtering it rather
6:44
than using all the other ones are using
6:46
leads which are the fundamental colors
6:47
but uh this is actually doing white and
6:50
it's a combination of this filter and
6:52
the dichroic mirrors that are turning to
6:55
Yellow if we take the filter out and a
6:57
lot of the a lot of the um blue content
7:00
is going this way the red is going
7:02
straight through these two mirrors so
7:05
this is clearly not reflecting much of
7:08
that so we end up with the yellow coming
7:10
out of uh out of there which is a fairly
7:14
light yellow color which you don't
7:16
really see from high intensity leads so
7:19
that's clearly why they've used the
7:20
white to uh do this power consumption of
7:23
the white is pretty high so going up to
7:25
about 2 and 1 half amps on that color
7:27
whereas most of the other colors are
7:28
only drawing half an amp or so at 24
7:30
volts the uh the green is up to about
7:32
1.2 but say this thing is uh much
7:35
brighter and if you actually run all the
7:38
colors at the same time you get a fairly
7:41
reasonable um looking white coming out
7:43
of it and one thing you might just be
7:45
out to notice is there is some sort
7:46
color banding around here that's not
7:49
getting uh everything s completely
7:51
concentric and I think that's where this
7:53
fiber optic thing comes
7:58
in I'll
8:00
get a couple of Fairly accurately shaped
8:04
very sort of uniform color and looking
8:06
at What's um inside here we've basically
8:09
just got this Square Rod so this is
8:12
clearly yeah the lights just bouncing
8:13
off all the all the various sides to um
8:16
get a nice uniform illumination uh this
8:19
back bit looks like it's all potted so
8:21
nothing I really do to get in there I
8:24
think this is fiber so I have come
8:26
across um cables like this which are
8:27
liquid fill but just looking through the
8:30
end of this it's probably a bit hard to
8:31
see it does look like there fiber ends
8:34
going going on there and so there's this
8:36
feedback thing which is just obviously
8:39
compensating for the any light losses
8:41
through here to get an accurate
8:43
representation of uh the light that's
8:45
been launched out of these two
8:47
fibers and you see uh
8:49
these have got this sort of trapezium
8:54
shape light guides again it's like a
8:56
sort of acrylic or glass light guide
9:00
guess projected just to make the right
9:03
rectangular
9:04
shape and look at this Center assembly
9:07
um the light output doesn't uh change
9:10
whether you feed this in or not so it's
9:11
clear not doing any internal Clos Loop
9:14
control obviously there may well be some
9:16
facility for it to do that but it's not
9:17
being used in this
9:19
application and so this output just
9:21
produces a voltage on the uh outle
9:24
connector proportional to the amount of
9:26
light that's present so there's a little
9:28
diffuser in the back there
9:30
and then there's just some kind of uh
9:33
Optical sensor looks like a
9:35
chip looking at the lead it's a very
9:37
small package on the PCB with this lens
9:40
assembly over the top and these look
9:43
like they're actually on a copper
9:44
Metalized PCB for maximum thermal
9:47
performance and yeah it's a very small
9:49
package looks like it's a ceramic
9:51
package and there's a thermister there
9:53
for temperature monitoring this is the
9:56
475 blue one this is the 520 need to
9:59
Green which is uh rather different OB
10:02
it's a much bigger D with lots of bond
10:04
wise but also this looks like it's using
10:05
a phosphor if I shine a blue light at it
10:08
lights up green so this is actually a
10:10
phosphor conversion green lead which
10:12
I've I've come across before they want
10:15
that specific wavelength so they may be
10:17
easier to tune a phosphor than tune the
10:20
um semiconductor material to get the uh
10:23
right right wavelength from the lead
10:24
directly uh red 630 similar size to the
10:28
blue one or does seem to have a uh a
10:31
lens on top of it there is a sort of red
10:33
coloring to
10:35
the die but that doesn't appear to be
10:38
fluorescent as far as I can
10:39
tell and the white one again a little
10:41
bit different sort of much higher
10:43
current
10:46
connectors a makeer name on that
10:48
connector flot light not sure if that's
10:52
the connector or the lead
10:54
itself and obviously with the phosphor
10:56
and I'd imagine that phosphor may well
10:58
be tuned to get the maximum to the uh 5
11:01
cenm and actually this white one looks
11:04
like a St fairly standard product I just
11:06
found it in Mouse made by luminous
11:09
devices in fact actually I think all
11:11
these are based on various luminous
11:13
devices modules and they're you take
11:17
looks like they taking the nearest
11:18
wavelength and then just using these
11:19
filters to clean it up to get a precise
11:22
uh spectral line out of it so quite a
11:25
nice neat and um extreme
11:30
bright light source uh sure I've got any
11:33
particular use for it so I think this
11:35
might end up on
11:36
eBay but uh very pretty to look out and
11:40
without the uh risk of burning your eyes
11:43
out like you do with lasers so I thought
11:45
it would be interesting to try and
11:46
figure out the runtime of this things
11:48
like this we usually keep some sort
11:49
record of runtime cuz leads degrade over
11:51
time I couldn't get any software to work
11:52
through the USB face but then had a
11:54
thought probably going to be writing the
11:55
runtime periodically to the e s prom so
11:58
I just just scope up that and noticed it
12:00
was doing right every 5 minutes so I
12:02
just ran it for a while periodically
12:04
reading the E squ I just held the pick
12:05
in in reset and um put clip over to read
12:07
the square prom and found it was writing
12:10
one location per color every 5 minutes
12:12
so if one color was on it would write
12:14
that location every 5 minutes and just
12:16
increment it by one so after doing a few
12:18
tests with different colors of different
12:19
time periods it looked extremely
12:21
straightforward it's like a four bite
12:22
count for each color looking at the
12:24
original data that was in it all the
12:26
colors apart from Green were reading
12:28
zero and the green was reading four
12:30
indicating a total 20 minutes run time
12:32
ever if it was turned on run for a short
12:34
time then turned off that might not have
12:36
been counted but even so indicates this
12:37
thing wasn't used a great deal the whole
12:40
s process of doing a run can be several
12:42
hours but it'll only be doing probably
12:43
the Imaging at the end of that so you
12:46
wouldn't expect to be running for a long
12:47
time but say a single color for 20
12:50
minutes over its whole lifetime does
12:52
seem a little bit on the low side okay
12:55
let's look at the camera un fortunately
12:57
I managed to not record any sound when I
12:58
did this it's also a couple of months
13:00
ago so there's going to be a few details
13:02
that I've forgotten so I'm just going to
13:04
dub this over the original footage so um
13:07
take the lid off see this massive great
13:10
heat sink so this is a pel cool camera
13:12
we've got this blower fan producing a
13:14
fair amount of air flow through
13:16
it the connector here there's the ccds
13:19
mounted on the board on the
13:24
right this unplugs so we've got a bit of
13:27
power supply stuff on here
13:29
USB interface I think that's the Cyprus
13:32
microcontroller High speeded USB
13:34
interface there's a zyink spon fpga some
13:40
RAM and there's a couple of ATD
13:42
converters can't quite read what those
13:45
those are but anal
13:47
devices um little bit of bodgery around
13:51
here extra decoupling obviously they
13:53
have having some noise issues this is
13:55
around the ram chip quite a lot of extra
13:57
capacitors been added there
13:59
uh there's a couple of amplifiers prior
14:01
to the HD converter buffers or Andor
14:05
amplifiers taking the CCD
14:08
signal um bit more power spy stuff here
14:11
this is probably all to do with
14:12
generating the various CCD bias voltages
14:14
they uh need quite a lot of exotic
14:18
voltages next board down is just a
14:20
shield and an interconnect
14:24
boardly shielding the power supply stuff
14:26
from some the more sensitive an log
14:28
stuff
14:31
and this is the bottom board which is
14:32
just all power supply
14:34
stuff as you can see tons of capacitors
14:37
or Transformer in
14:42
there and this is the CCD which is a uh
14:47
very impressive thing this is a kf50 100
14:50
originally by true sense then codec
14:53
there ON
14:54
Semiconductor it's 50 megapixels uh the
14:58
only price I could find was this one
15:00
5,000 bucks and the architecture you can
15:03
see there actually two separate halves
15:04
which explains the Dual AZ converters
15:06
and two amplifiers it's literally split
15:08
down the middle and duplicated so it's
15:10
outputting two streams in parallel just
15:13
to keep the bandwidth sensible and it's
15:15
got this amazing um diffraction effects
15:18
it's got micro lenses over the pixel so
15:20
there's there's a bit more Optics going
15:22
on than on a normal
15:25
sensor few more bodges on the CCD board
15:28
including this wire which isn't really
15:29
tacked down very well which is a bit uh
15:32
bit of a mess quite a few bits around
15:34
this board where they've uh tacked
15:36
various bits on which is not super
15:38
impressive looks like CCD drivers on the
15:40
left with those 3 ohm um damping
15:43
resistors on the
15:47
output get a few more little bodges
15:50
around here some of
15:52
the and there's this separator the
15:54
silica gel to keep the moisture down but
15:56
there's this separator that actually
15:58
appears to be cut from piece of
15:59
antistatic
16:04
bag and this sort of thermal block on
16:06
top of this stack of three pel Cola
16:12
modules so as with any Stacks they get
16:16
um larger as they go back towards the
16:18
heat sink because each P's got to not
16:20
only take the heat from the previous but
16:21
also the waste heat which is quite
16:27
significant you see a little temperature
16:29
sensor here that copper block which
16:32
makes contact with the back of the
16:37
CCD and this's the back of the
16:40
pelas this then contacts the heat sink
16:44
on the uh rear there a few thermal pads
16:46
as well for some of the other power
16:47
components on this
16:51
PCB okay I've connected this uh camera
16:54
up I found some drivers on the disc that
16:56
seem to work under Windows 7 couldn't
16:58
get to install under Windows 11 though
17:01
um in the absence of any sort of lens or
17:03
being bothered to the proper amount I've
17:04
just put some f over it and put a little
17:06
pin in there to make a pinhole lens and
17:08
software gives a few options I'm not
17:11
entirely sure what all these are there's
17:12
obviously a clock frequency 22 MHz low
17:15
gain and with PFG no idea what that is
17:19
something something game programmable
17:20
Something game perhaps ver exposure
17:23
types I think focus is just like a
17:25
continuous grab until you tell it to
17:27
stop not entirely sure all these options
17:30
are obviously exposure time uh triggers
17:33
there ex external hardware trigger inut
17:35
you just trigger using a um thing on
17:37
screen so the resolution is 8176 by
17:40
6132 and you can actually bin those
17:42
where you combine multiple pixels to get
17:46
increased gain at the expense of lower
17:48
resolution down this is a 10sec exposure
17:51
obviously of the pin hole it's very uh
17:53
intensitive so we just stand still now
17:56
downloading it there's the uh exposure
17:59
so when it's
18:01
um there's a little status thing down
18:03
here so that tells you the um exposure
18:07
[Applause]
18:09
time it's this is just it
18:15
downloading um it is quite I'm seeing
18:18
quite a lot like smearing I think that I
18:20
don't know whether that's just due to
18:21
pixels overloading or something else I
18:24
mean yeah it's not it's not um out of
18:26
the question that there's something not
18:27
totally right about this camera
18:28
certainly was bodge wise on there um I
18:31
don't I'd imagine a camera like this
18:32
it's got a fairly narrow range of
18:34
intensities that it's happy with I'm not
18:36
going to spend a great deal of time on
18:38
this if you're interested in this camera
18:40
maybe for astronomy or something and
18:42
happy to sort of take the risk of it may
18:44
not be uh perfect I'll um I think I'll
18:47
stick this on eBay along with the
18:48
Illuminator I'll put a link down in the
18:50
description to the listing take your
18:52
chances to grab a bargain so for example
18:54
here we see this vertical streaking so
18:56
I'm not sure how normal that is this is
18:58
on fairly bright scene looking out the
19:02
window if I cut the exposure time down
19:04
on that it's now 1 second
19:07
exposure again most of the image
19:09
disappears again this is looks like it's
19:11
possibly over still overloading here go
19:14
that go down to say say quarter a
19:16
second so again I think there might be
19:19
some Auto gain control going on here um
19:21
this is with the PFG option let's try
19:23
turning that off and see what
19:25
happens so I'm not sure this is actually
19:27
more streaking or which just it's
19:29
cranked up the gain all the dis display
19:31
gray scale to show what um you know the
19:33
range of things that it's captured
19:36
there's one of one of 12 things in the
19:38
software there's um you can see of you
19:40
can't seem to read out the temperature
19:42
of the pelta cooler but you can set the
19:44
temperature and if you said it's a
19:46
different temperature you see the power
19:48
consumption jump up running the cooler
19:50
to get the temperature you requested but
19:52
I can't see anything anywhere that tells
19:54
you whether the cool is at the at the
19:56
temperature other than the power
19:57
consumption going down and there's no
19:59
temperature read out
20:03
here and just some yeah this is just
20:05
sort of very basic software I'm sure
20:07
there's like an API for more
20:09
sophisticated
20:10
applications but so if you know anything
20:12
more about these cameras please um stick
20:14
in the
20:15
comments um incidentally when I was
20:18
editing I didn't notice there was a bent
20:19
pin on the um CCD but I did fix that
20:22
before doing these tests and also
20:24
reactivated the um silica gel desicant
20:26
cuz I noticed it was uh I was getting
20:28
bit of condensation on the window but um
20:31
yeah so a couple of uh interesting but
20:34
maybe not particularly uh useful pieces
20:37
of Kit except for someone that's got a
20:38
very specific use so um I'll stick a
20:42
I'll stick these on eBay put a link in
20:44
the description and say hopefully
20:45
someone could actually make some uh good
20:47
use of these things
Example Output:
**Abstract:**
This video presents Part 2 of a teardown focusing on the optical components of a Fluidigm Polaris biotechnology instrument, specifically the multi-wavelength illuminator and the high-resolution CCD camera.
The Lumen Dynamics illuminator unit is examined in detail, revealing its construction using multiple high-power LEDs (430nm, 475nm, 520nm, 575nm, 630nm) combined via dichroic mirrors and filters. A square fiber optic rod is used to homogenize the light. A notable finding is the use of a phosphor-converted white LED filtered to achieve the 575nm output. The unit features simple TTL activation for each color, conduction cooling, and internal homogenization optics. Analysis of its EEPROM suggests extremely low operational runtime.
The camera module teardown showcases a 50 Megapixel ON Semiconductor KAF-50100 CCD sensor with micro-lenses, cooled by a multi-stage Peltier stack. The control electronics include an FPGA and a USB interface. Significant post-manufacturing modifications ("bodges") are observed on the camera's circuit boards. Basic functional testing using vendor software and a pinhole lens confirms image capture but reveals prominent vertical streaking artifacts, the cause of which remains uncertain (potential overload, readout artifact, or fault).
**Exploring the Fluidigm Polaris: A Detailed Look at its High-End Optics and Camera System**
* **0:00 High-End Optics:** The system utilizes heavy, high-quality lenses and mirrors for precise imaging, weighing around 4 kilos each.
* **0:49 Narrow Band Filters:** A filter wheel with five narrow band filters (488, 525, 570, 630, and 700 nm) ensures accurate fluorescence detection and rejection of excitation light.
* **2:01 Customizable Illumination:** The Lumen Dynamics light source offers five individually controllable LED wavelengths (430, 475, 520, 575, 630 nm) with varying power outputs. The 575nm yellow LED is uniquely achieved using a white LED with filtering.
* **3:45 TTL Control:** The light source is controlled via a simple TTL interface, enabling easy on/off switching for each LED color.
* **12:55 Sophisticated Camera:** The system includes a 50-megapixel ON Semiconductor (originally TrueSense/Kodak) KAF-50100 CCD camera with a Peltier cooling system for reduced noise.
* **14:54 High-Speed Data Transfer:** The camera features dual analog-to-digital converters to manage the high data throughput of the 50-megapixel sensor, which is effectively two 25-megapixel sensors operating in parallel.
* **18:11 Possible Issues:** The video creator noted some potential issues with the camera, including image smearing.
* **18:11 Limited Dynamic Range:** The camera's sensor has a limited dynamic range, making it potentially challenging to capture scenes with a wide range of brightness levels.
* **11:45 Low Runtime:** Internal data suggests the system has seen minimal usage, with only 20 minutes of recorded runtime for the green LED.
* **20:38 Availability on eBay:** Both the illuminator and camera are expected to be listed for sale on eBay.
Here is the real transcript. What would be a good group of people to review this topic? Please summarize provide a summary like they would:
Claude Opus 4.6 (anthropic.com)
1962 points by HellsMaddy 16 hours ago | hide | past | favorite | 835 comments
ck_one 12 hours ago | next [–]
Just tested the new Opus 4.6 (1M context) on a fun needle-in-a-haystack challenge: finding every spell in all Harry Potter books.
All 7 books come to ~1.75M tokens, so they don't quite fit yet. (At this rate of progress, mid-April should do it ) For now you can fit the first 4 books (~733K tokens).
Results: Opus 4.6 found 49 out of 50 officially documented spells across those 4 books. The only miss was "Slugulus Eructo" (a vomiting spell).
Freaking impressive!
reply
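A minimal sketch of the kind of harness described in the comment above: load the book text into the prompt and compare the model's extracted spells against a reference list. The model id, file names, and 1M-context availability are placeholders and assumptions; the `anthropic` Python SDK messages call is the standard one.

```python
import anthropic

# Sketch of the needle-in-a-haystack test described above. books.txt holds the
# concatenated book text; known_spells.txt holds one reference spell per line.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

books = open("books.txt", encoding="utf-8").read()
known = {s.strip().lower() for s in open("known_spells.txt", encoding="utf-8") if s.strip()}

msg = client.messages.create(
    model="claude-opus-4-6",  # assumed model id; check the current model list
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": f"{books}\n\nList every spell incantation that appears in the text above, one per line.",
    }],
)
found = {line.strip().lower() for line in msg.content[0].text.splitlines() if line.strip()}
print(f"found {len(found & known)} of {len(known)} reference spells")
print("missed:", sorted(known - found))
```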
grey-area 7 minutes ago | parent | next [–]
Surely the corpus Opus 4.6 ingested would include whatever reference you used to check the spells were there. I mean, there are probably dozens of pages on the internet like this:
https://www.wizardemporium.com/blog/complete-list-of-harry-p...
Why is this impressive?
Do you think it's actually ingesting the books and only using those as a reference? Is that how LLMs work at all? It seems more likely it's predicting these spell names from all the other references it has found on the internet, including lists of spells.
reply
sigmoid10 1 minute ago | root | parent | next [–]
Most people still don't realize that general public world knowledge is not really a test for a model that was trained on general public world knowledge. I wouldn't be surprised if even proprietary content like the books themselves found their way into the training data, despite what publishers and authors may think of that.
reply
hereonout2 35 minutes ago | parent | prev | next [–]
I was playing about with Chat GPT the other day, uploading screen shots of sheet music and asking it to convert it to ABC notation so I could make a midi file of it.
The results seemed impressive until I noticed some of the "Thinking" statements in the UI.
One made it apparent the model / agent / whatever had read the title from the screenshot and was off searching for existing ABC transcripts of the piece Ode to Joy.
So the whole thing was far less impressive after that, it wasn't reading the score anymore, just reading the title and using the internet to answer my query.
reply
anomaly_ 0 minutes ago | root | parent | next [–]
Sounds pretty human like! Always searching for a shortcut
reply
nobodywillobsrv 33 minutes ago | root | parent | prev | next [–]
Yes I have found that grok for example actually suddenly becomes quite sane when you tell it to stop querying the internet and just rethink the conversation data and answer the question.
It's weird, it's like many agents are now in a phase of constantly getting more information and never just thinking with what they've got.
reply
Szpadel 0 minutes ago | root | parent | next [–]
but isn't it what we wanted? we complained so much that LLM uses deprecated or outdated apis instead of current version because they relied so much on what they remembered
bestham 24 minutes ago | root | parent | prev | next [–]
Touché, that is what we humans are doing to some degree as well.
reply
xiomrze 11 hours ago | parent | prev | next [–]
Honest question, how do you know if it's pulling from context vs from memory?
If I use Opus 4.6 with Extended Thinking (Web Search disabled, no books attached), it answers with 130 spells.
reply
ozim 11 hours ago | root | parent | next [–]
Exactly, there was this study where they tried to make an LLM reproduce the HP book word for word by giving it the first sentences and letting it cook.
Basically they managed, with some tricks, to get 99% of it word for word - the tricks were needed to bypass safeguards that are in place for exactly this reason: to stop people from retrieving training material.
reply
pron 10 hours ago | root | parent | next [–]
This reminds me of https://en.wikipedia.org/wiki/Pierre_Menard,_Author_of_the_Q... :
> Borges's "review" describes Menard's efforts to go beyond a mere "translation" of Don Quixote by immersing himself so thoroughly in the work as to be able to actually "re-create" it, line for line, in the original 17th-century Spanish. Thus, Pierre Menard is often used to raise questions and discussion about the nature of authorship, appropriation, and interpretation.
reply
ck_one 11 hours ago | root | parent | prev | next [–]
Do you remember how to get around those tricks?
reply
djhn 11 hours ago | root | parent | next [–]
This is the paper: https://arxiv.org/abs/2601.02671
Grok and Deepmind IIRC didn’t require tricks.
reply
eek2121 10 hours ago | root | parent | next [–]
This really makes me want to try something similar with content from my own website.
I shut it down a while ago because bot traffic came to overtake the human traffic. The site had quite a bit of human traffic (enough to bring in a few hundred bucks a month in ad revenue, and a few hundred more in subscription revenue); however, the AI scrapers really started ramping up and the only way I could realistically continue would have been to pay a lot more for hosting/infrastructure.
I had put a ton of time into building out content...thousands of hours, only to have scrapers ignore robots, bypass cloudflare (they didn't have any AI products at the time), and overwhelm my measly infrastructure.
Even now, with the domain pointed at NOTHING, it gets almost 100,000 hits a month. There is NO SERVER on the other end. It is a dead link. The stats come from Cloudflare, where the domain name is hosted.
I'm curious if there are any lawyers who'd be willing to take someone like me on contingency for a large copyright lawsuit.
reply
raphman 18 minutes ago | root | parent | next [–]
a) As an outside observer, I would find such a lawsuit very interesting/valuable. But I guess the financial risk of taking on OpenAI or Anthropic is quite high.
b) If you don't want bots scraping your content and DDOSing you, there are self-hosted alternatives to Cloudflare. The simplest one that I found is https://github.com/splitbrain/botcheck - visitors just need to press a button and get a cookie that lets them through to the website. No proof-of-work or smart heuristics.
reply
apsurd 4 hours ago | root | parent | prev | next [–]
Can we help get your infra cost down to negligible? I'm thinking things like pre-generated static pages and CDNs. I won't assume you hadn't thought of this before, but I'd like to understand more where your non-trivial infra cost come from?
reply
djhn 3 hours ago | root | parent | next [–]
I would be tempted to try and optimise this as well. 100000 hits on an empty domain and ~200 dollars worth of bot traffic sounds wild. Are they using JS-enabled browsers or sim farms that download and re-download images and videos as well?
reply
camdenreslink 9 hours ago | root | parent | prev | next [–]
The new cloudflare products for blocking bots and AI scrapers might be worth a shot if you put so much work into the content.
reply
petercooper 11 hours ago | root | parent | prev | next [–]
One possible trick could be to search and replace them all with nonsense alternatives then see if it extracts those.
reply
andai 11 hours ago | root | parent | next [–]
That might actually boost performance since attention pays attention to stuff that stands out. If I make a typo, the models often hyperfixate on it.
reply
jazzyjackson 6 hours ago | root | parent | prev | next [–]
A fine instruction following task but if harry potter is in the weights of the neural net, it's going to mix some of the real ones with the alternates.
reply
ck_one 11 hours ago | root | parent | prev | next [–]
When I tried it without web search so only internal knowledge it missed ~15 spells.
reply
clanker_fluffer 11 hours ago | root | parent | prev | next [–]
What was your prompt?
reply
matt_lo 4 hours ago | parent | prev | next [–]
Use AI to rewrite all the spells from all the books, then see if AI can detect the rewritten ones. This will ensure it's not pulling from its trained data set.
reply
gbalduzzi 3 hours ago | root | parent | next [–]
Neat idea, but why should I use AI for a find and replace?
It feels like shooting a fly with a bazooka
reply
jack_pp 1 hour ago | root | parent | next [–]
it's like hiring someone to come pick up your trash from your house and put it on the curb.
it's fine if you're disabled
reply
imafish 4 minutes ago | root | parent | prev | next [–]
If all you have is a hammer.. ;)
reply
miohtama 3 hours ago | root | parent | prev | next [–]
Bazooka guarantees the hit
reply
xenodium 2 hours ago | root | parent | next [–]
I like LLMs, but guarantees in LLMs are... you know... not guaranteed ;)
reply
throwaway290 1 hour ago | root | parent | next [–]
I think that was the point
reply
luckydata 1 hour ago | root | parent | prev | next [–]
do you know all the spells you're looking for from memory?
reply
wickedsight 1 hour ago | root | parent | next [–]
You could just, you know, Google the list.
reply
bilekas 1 hour ago | root | parent | prev | next [–]
You're missing the point, it's only a testing exercise for the new model.
reply
happyraul 1 hour ago | root | parent | next [–]
No, the point is that you can set up the testing exercise without using an LLM to do a simple find and replace.
reply
bilekas 45 minutes ago | root | parent | next [–]
... I'm not sure if you're trolling or if you missed the point again. The point is to test the LLM's contextual ability and correctness when performing actions on material that is hopefully guaranteed not to be in the training data.
It has nothing to do with the performance of the string replacement.
reply
LeoPanthera 2 hours ago | root | parent | prev | next [–]
That won't help. The AI replacing them will probably miss the same ones as the AI finding them.
reply
steve1977 47 minutes ago | root | parent | next [–]
I think the question was if it will still find 49 out of 50 if they have been replaced.
reply
golfer 11 hours ago | parent | prev | next [–]
There's lots of websites that list the spells. It's well documented. Could Claude simply be regurgitating knowledge from the web? Example:
https://harrypotter.fandom.com/wiki/List_of_spells
reply
qwertytyyuu 6 hours ago | root | parent | next [–]
Hmm… maybe he could switch out all the spells names slightly different ones and see how that goes
reply
ck_one 11 hours ago | root | parent | prev | next [–]
It didn't use web search. But for sure it has some internal knowledge already. It's not a perfect needle-in-the-haystack problem, but Gemini Flash was much worse when I tested it last time.
reply
viraptor 11 hours ago | root | parent | next [–]
If you want to really test this, search/replace the names with your own random ones and see if it lists those.
Otherwise, LLMs have most of the books memorised anyway: https://arstechnica.com/features/2025/06/study-metas-llama-3...
reply
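A minimal sketch of the substitution test proposed here: swap each known spell for a freshly invented nonsense name before building the prompt, so a correct answer can only come from the supplied context rather than memorized text. The spell list and replacement scheme are illustrative.

```python
import random
import re
import string

# Replace every known spell name with a unique nonsense token so that retrieval
# can only succeed by reading the supplied context, not by recalling training data.
def nonsense_word(rng: random.Random, length: int = 9) -> str:
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length)).capitalize()

def substitute_spells(text: str, spells: list[str], seed: int = 0) -> tuple[str, dict[str, str]]:
    rng = random.Random(seed)
    mapping = {spell: nonsense_word(rng) for spell in spells}
    for original, fake in mapping.items():
        text = re.sub(re.escape(original), fake, text, flags=re.IGNORECASE)
    return text, mapping

# Placeholder example; a real run would use the full book text and spell list.
text, mapping = substitute_spells("Harry shouted 'Expelliarmus!'", ["Expelliarmus"])
print(text)     # the spell is now a nonsense word
print(mapping)  # ground truth to score the model's answer against
```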
jazzyjackson 6 hours ago | root | parent | next [–]
Being that it has the books memorized (huh, just learned another US/UK spelling quirk), I would suppose feeding it the books with altered spells would get you a confused mishmash of data in the context and data in the weights.
reply
ribosometronome 9 hours ago | root | parent | prev | next [–]
Couldn't you just ask the LLM which 50 (or 49) spells appear in the first four Harry Potter books without the data for comparison?
reply
viraptor 9 hours ago | root | parent | next [–]
It's not going to be as consistent. It may get bored of listing them (you know how you can ask for many examples and get 10 in response?), or omit some minor ones for other reasons.
By replacing the names with something unique, you'll get much more certainty.
reply
Grimblewald 9 hours ago | root | parent | prev | next [–]
might not work well, but by navigating to a very Harry Potter-dominant part of latent space by preconditioning on the books, you make it more likely to get good results. An example would be taking a base model and prompting "what follows is the book 'X'"; it may or may not regurgitate the book correctly. Give it a chunk of the first chapter and let it regurgitate from there and you tend to get fairly faithful recovery, especially for things on Gutenberg.
So it might be that by preconditioning latent space to the area of the Harry Potter world, you make it much more probable that the full spell list is regurgitated from online resources that were also read, while asking naively might get it sometimes and sometimes not.
The books act like a hypnotic trigger, and may not represent a generalized skill. Hence why replacing with random words would help clarify: if you still get the original spells, regurgitation is confirmed; if it finds the replaced spells, it could be doing what we think. An even better test would be to replace all spell references AND jumble the chapters around. That way it can't even "know" where to "look" for the spell names from training.
reply
angst 7 hours ago | root | parent | prev | next [–]
btw it recalls 42 when i asked. (without web search)
full transcript: pastebin.com/sMcVkuwd
reply
f33d5173 7 hours ago | root | parent | next [–]
Not sure how they're being counted, but that adds up to 46 with the pair spells counted separately. But then nox is counted twice, so maybe 45.
reply
heavyset_go 7 hours ago | root | parent | prev | next [–]
No, because you don't know the magic spell (forgive me) of context that can be used to "unlock" that information if it's stored in the NN.
I mean, you can try, but it won't be a definitive answer as to whether that knowledge truly exists or doesn't exist as it is encoded into the NN. It could take a lot of context from the books themselves to get to it.
reply
joshmlewis 11 hours ago | root | parent | prev | next [–]
I think the OP was implying that it's probably already baked into its training data. No need to search the web for that.
reply
obirunda 8 hours ago | root | parent | prev | next [–]
This underestimates how much of the Internet is actually compressed into and is an integral part of the model's weights. Gemini 2.5 can recite the first Harry Potter book verbatim for over 75% of the book.
reply
NiloCK 7 hours ago | root | parent | next [–]
I'm getting astrology when I search for this. Any links on this?
reply
f33d5173 7 hours ago | root | parent | next [–]
Iirc it's not quite true. 75% of the book is more likely to appear than you would expect by chance if prompted with the prior tokens. This suggests that it has the book encoded in its weights, but you can't actually recover it by saying "recite harry potter for me".
reply
jdminhbg 7 hours ago | root | parent | next [–]
Do you happen to know, is that because it can’t recite Harry Potter, or because it’s been instructed not to recite Harry Potter?
reply
jazzyjackson 6 hours ago | root | parent | next [–]
It's a matter of token likelihood... as a continuation, the rest of chapter one is highly likely to follow the first paragraph.
The full text of Chapter One is not the only/likeliest possible response to "recite chapter one of harry potter for me"
reply
jamesfinlayson 5 hours ago | root | parent | prev | next [–]
Instructed not to was my understanding.
reply
obirunda 6 hours ago | root | parent | prev | next [–]
https://arxiv.org/abs/2601.02671?hl=en-US
reply
altmanaltman 4 hours ago | root | parent | prev | next [–]
> But for sure it has some internal knowledge already.
Pretty sure the books had to be included in its training material in full text. It's one of the most popular book series ever created, of course they would train on it. So "some" is an understatement in this case.
reply
Trasmatta 9 hours ago | root | parent | prev | next [–]
Do the same experiment in the Claude web UI. And explicitly turn web searches off. It got almost all of them for me over a couple of prompts. That stuff is already in its training data.
reply
soulofmischief 10 hours ago | root | parent | prev | next [–]
The only worthwhile version of this test involves previously unseen data that could not have been in the training set. Otherwise the results could be inaccurate to the point of being harmful.
reply
IAmGraydon 6 hours ago | root | parent | prev | next [–]
I'm not sure what your knowledge level of the inner workings of LLMs is, but a model doesn't need search or even an internet connection to "know" the information if it's in its training dataset. In your example, it's almost guaranteed that the LLM isn't searching books - it's just referencing one of the hundreds of lists of those spells in its training data.
This is the LLM's magic trick that has everyone fooled into thinking they're intelligent - it can very convincingly cosplay an intelligent being by parroting an intelligent being's output. This is equivalent to making a recording of Elvis, playing it back, and believing that Elvis is actually alive inside of the playback device. And let's face it, if a time traveler brought a modern music playback device back hundreds of years and showed it to everyone, they WOULD think that. Why? Because they have not become accustomed to the technology and have no concept of how it could work. The same is true of LLMs - the technology was thrust on society so quickly that there was no time for people to adjust and understand its inner workings, so most people think it's actually doing something akin to intelligence. The truth is it's just as far from intelligence as your music playback device is from having Elvis inside of it.
reply
eek2121 11 hours ago | root | parent | prev | next [–]
Honestly? My advice would be to cook something custom up! You don't need to do all the text yourself. Maybe have AI spew out a bunch of text, or take obscure existing text and insert hidden phrases here or there.
Shoot, I'd even go so far as to write a script that takes in a bunch of text, reorganizes sentences, and outputs them in a random order with the secrets. Kind of like a "Where's Waldo?", but for text
Just a few casual thoughts.
I'm actually thinking about coming up with some interesting coding exercises that I can run across all models. I know we already have benchmarks, however some of the recent work I've done has really shown huge weak points in every model I've run them on.
reply
clhodapp 10 hours ago | root | parent | next [–]
Having AI spew it might suffer from the fact that the spew itself is influenced by the AI's weights. I think your best bet would be to use a new human-authored work that was released after the model's training cutoff.
reply
meroes 11 hours ago | parent | prev | next [–]
What is this supposed to show exactly? Those books have been feed into LLMs for years and there's even likely specific RLHF's on extracting spells from HP.
reply
muzani 11 hours ago | root | parent | next [–]
There was a time when I put the EA-Nasir text into base64 and asked AI to convert it. Remarkably it identified the correct text but pulled the most popular translation of the text rather than the one I gave it.
reply
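A tiny sketch of the probe described above: base64-encode a custom passage and ask the model to decode it, then check whether the reply matches the supplied text or drifts toward a better-known published version. The passage wording here is a placeholder.

```python
import base64

# Encode a custom passage; if the model's "decoding" matches a popular published
# translation instead of this exact text, it is recalling training data rather
# than actually decoding the base64 in context.
passage = "Tell Ea-nasir: the copper you delivered was not of the quality we agreed on."  # placeholder wording
encoded = base64.b64encode(passage.encode("utf-8")).decode("ascii")
print(encoded)                                               # paste this into the model
print(base64.b64decode(encoded).decode("utf-8") == passage)  # ground-truth check
```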
majewsky 8 hours ago | root | parent | next [–]
Sucks that you got a really shitty response to your prompt. If I were you, the model provider would be receiving my complaint via clay tablet right away.
reply
rvz 11 hours ago | root | parent | prev | next [–]
> What is this supposed to show exactly?
Nothing.
You can be sure that this was already present in the training data of PDFs, books and websites that Anthropic scraped to train Claude on; hence 'documented'. This is why tests like the one the OP just did are meaningless.
Such "benchmarks" are performative for VCs, who do not ask why the research and testing isn't done independently rather than almost always by the labs' own in-house researchers.
reply
zamadatix 12 hours ago | parent | prev | next [–]
To be fair, I don't think "Slugulus Eructo" (the name) is actually in the books. This is what's in my copy:
> The smug look on Malfoy’s face flickered.
> “No one asked your opinion, you filthy little Mudblood,” he spat.
> Harry knew at once that Malfoy had said something really bad because there was an instant uproar at his words. Flint had to dive in front of Malfoy to stop Fred and George jumping on him, Alicia shrieked, “How dare you!”, and Ron plunged his hand into his robes, pulled out his wand, yelling, “You’ll pay for that one, Malfoy!” and pointed it furiously under Flint’s arm at Malfoy’s face.
> A loud bang echoed around the stadium and a jet of green light shot out of the wrong end of Ron’s wand, hitting him in the stomach and sending him reeling backward onto the grass.
> “Ron! Ron! Are you all right?” squealed Hermione.
> Ron opened his mouth to speak, but no words came out. Instead he gave an almighty belch and several slugs dribbled out of his mouth onto his lap.
reply
sobjornstad 10 hours ago | root | parent | next [–]
I have a vague recollection that it might come up named as such in Half-Blood Prince, written in Snape's old potions textbook?
In support of that hypothesis, the Fandom site lists it as “mentioned” in Half-Blood Prince, but it says nothing else and I'm traveling and don't have a copy to check, so not sure.
reply
zamadatix 8 hours ago | root | parent | next [–]
Hmm, I don't get a hit for "slugulus" or "eructo" (case insensitive) in any of the 7. Interestingly, two mentions of "vomit" are in book 6, but neither in reference to slugs (plenty of Slughorn, of course!). Book 5 was the only other one where a related hit came up:
> Ron nodded but did not speak. Harry was reminded forcibly of the time that Ron had accidentally put a slug-vomiting charm on himself. He looked just as pale and sweaty as he had done then, not to mention as reluctant to open his mouth.
There could be something with regional variants but I'm doubtful as the Fandom site uses LEGO Harry Potter: Years 1-4 as the citation of the spell instead of a book.
Maybe the real LLM is the universe and we're figuring this out for someone on Slacker News a level up!
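(For anyone wanting to reproduce the search, a small sketch assuming plain-text copies of the books in a local "books/" directory; the path and file layout are assumptions:)

```python
# Case-insensitive search for candidate spell names across local text files.
from pathlib import Path

for term in ("slugulus", "eructo"):
    hits = [
        (path.name, lineno + 1, line.strip())
        for path in sorted(Path("books").glob("*.txt"))
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines())
        if term in line.lower()
    ]
    print(f"{term}: {len(hits)} matching lines")
```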
reply
ck_one 11 hours ago | root | parent | prev | next [–]
Then it's fair that it didn't find it
reply
muzani 10 hours ago | parent | prev | next [–]
There's a benchmark which works similarly but they ask harder questions, also based on books https://fiction.live/stories/Fiction-liveBench-Feb-21-2025/o...
I guess they have to add more questions as these context windows get bigger.
reply
kybernetikos 10 hours ago | parent | prev | next [–]
I recently got junie to code me up an MCP for accessing my calibre library. https://www.npmjs.com/package/access-calibre
My standard test for that was "Who ends up with Bilbo's buttons?"
reply
dwa3592 10 hours ago | parent | prev | next [–]
Have another LLM (Gemini, ChatGPT) make up 50 new spells, insert those, test, and maybe report here :)
reply
dom96 9 hours ago | parent | prev | next [–]
I often wonder how much of the Harry Potter books were used in the training. How long before some LLM is able to regurgitate full HP books without access to the internet?
reply
bartman 11 hours ago | parent | prev | next [–]
Have you by any chance tried this with GPT 4.1 too (also 1M context)?
reply
siwatanejo 8 hours ago | parent | prev | next [–]
> All 7 books come to ~1.75M tokens
How do you know? Each word is one token?
reply
koakuma-chan 8 hours ago | root | parent | next [–]
You can download the books and run them through a tokenizer. I did that half a year ago and got ~2M.
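(A minimal sketch of that measurement, assuming plain-text files in a "books/" directory. tiktoken ships OpenAI's tokenizers, so the count is only an approximation for Claude, which uses its own tokenizer:)

```python
# Count tokens across a set of local text files with an OpenAI tokenizer.
from pathlib import Path
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
total = sum(
    len(enc.encode(p.read_text(encoding="utf-8")))
    for p in sorted(Path("books").glob("*.txt"))
)
print(f"~{total:,} tokens")
```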
reply
LanceJones 11 hours ago | parent | prev | next [–]
Assuming this experiment involved isolating the LLM from its training set?
reply
irishcoffee 10 hours ago | parent | prev | next [–]
The top comment is about finding basterized latin words from childrens books. The future is here.
reply
mhink 6 hours ago | root | parent | next [–]
> basterized
And yet, it's still somewhat better than the Hacker News comment using bastardized English words.
reply
Geste 10 hours ago | root | parent | prev | next [–]
I'll have some of that coffee too. This is quite a sad time we're living in, when this is a proper use of our limited resources.
reply
guluarte 11 hours ago | parent | prev | next [–]
You can get the same result just asking Opus/GPT; it is probably internalized knowledge from Reddit or similar sites.
reply
ck_one 11 hours ago | root | parent | next [–]
If you just ask it you don't get the same result. Around 13 spells were missing when I just prompted Opus 4.6 without the books as context.
reply
guluarte 6 hours ago | root | parent | next [–]
Because it is non-deterministic; I just did and got 50 spells.
prompt: finding every spell in all Harry Potter books in the first 4 books, only the officially documented spells across those 4 books, there are 50, without using websearch
Here's my best compilation from memory, organized by book. I'll be upfront: getting to exactly 50 is tricky without cross-referencing, so some might be debatable or I might miss a couple. Here goes:
---
*Book 1 - Philosopher's/Sorcerer's Stone*
1. *Lumos* - produces light
2. *Nox* - extinguishes Lumos
3. *Alohomora* - unlocks doors
4. *Wingardium Leviosa* - levitation
5. *Petrificus Totalus* - full body-bind
6. *Locomotor Mortis* - leg-locker curse
7. *Incendio* - produces fire
8. *Reparo* - mends broken objects
*Book 2 - Chamber of Secrets*
9. *Expelliarmus* - disarms opponent
10. *Rictusempra* - tickling charm
11. *Tarantallegra* - forces target to dance
12. *Serpensortia* - conjures a snake
13. *Finite Incantatem* - ends spell effects
14. *Obliviate* - erases memories
15. *Aparecium* - reveals invisible ink
16. *Immobulus* - freezes target in place
17. *Homorphus* - mentioned by Lockhart (werewolf cure claim)
18. *Peskipiksi Pesternomi* - Lockhart's failed pixie spell
*Book 3 - Prisoner of Azkaban*
19. *Expecto Patronum* - produces a Patronus
20. *Riddikulus* - repels a Boggart
21. *Waddiwasi* - shoots objects at target
22. *Impervius* - waterproofing charm
23. *Dissendium* - opens secret passage
24. *Ferula* - conjures bandages/splint
25. *Mobilicorpus* - moves a body
26. *Lumos Maxima* - intense light
*Book 4 - Goblet of Fire*
27. *Accio* - summoning charm
28. *Avada Kedavra* - killing curse
29. *Crucio* - Cruciatus curse (torture)
30. *Imperio* - Imperius curse (control)
31. *Stupefy* - stunning spell
32. *Engorgio* - enlarges target
33. *Reducio* - shrinks target
34. *Sonorus* - amplifies voice
35. *Quietus* - reverses Sonorus
36. *Morsmordre* - conjures the Dark Mark
37. *Priori Incantatem* - reveals last spell cast
38. *Deletrius* - erases magical residue
39. *Densaugeo* - enlarges teeth
40. *Furnunculus* - causes boils
41. *Impedimenta* - slows/stops target
42. *Reducto* - blasts solid objects
43. *Diffindo* - severing charm
44. *Relashio* - releases sparks/grip
45. *Orchideous* - conjures flowers
46. *Avis* - conjures birds
47. *Point Me* - Four-Point Spell (compass)
48. *Ennervate* - revives stunned person
49. *Protego* - shield charm
50. *Conjunctivitis Curse* - affects eyesight (Krum on the dragon)
---
A few caveats: some of these (like Lumos Maxima, Homorphus, Peskipiksi Pesternomi) are borderline since they're either mentioned rather than properly cast, or might be film additions that bleed into memory. The Conjunctivitis Curse is described but its incantation isn't explicitly given in the text. And Protego might technically first appear with its incantation in Book 5 during DA practice rather than Book 4.
If you want, I can turn this into a spreadsheet or document with columns for spell name, effect, who casts it, and which chapter.
reply
dr_dshiv 2 hours ago | parent | prev | next [–]
Comparison to another model?
reply
TheRealPomax 9 hours ago | parent | prev | next [–]
That doesn't seem like a super useful test for a model that's optimized for programming?
reply
huangmeng 7 hours ago | parent | prev | next [–]
you are rich
reply
IhateAI 8 hours ago | parent | prev | next [–]
like I often say, these tools are mostly useful for people to do magic tricks on themselves (and to convince C-suites that they can lower pay, and reduce staff if they pay Anthropic half their engineering budget lmao )
reply
dudewhocodes 1 hour ago | parent | prev | next [–]
There are websites with the spells listed... which makes this a search problem. Why is an LLM used here?
reply
bilekas 1 hour ago | root | parent | next [–]
It's just a benchmark test exercise.
reply
gizmodo59 15 hours ago | prev | next [–]
5.3 Codex https://openai.com/index/introducing-gpt-5-3-codex/ crushes it with 77.3% on Terminal Bench. The shortest-lived lead, lasting less than 35 minutes. What a time to be alive!
reply
wasmainiac 14 hours ago | parent | next [–]
Dumb question. Can these benchmarks be trusted when model performance tends to vary depending on the time of day and the load on OpenAI’s servers? How do I know I’m not getting a severe penalty for chatting at the wrong time? Or even: are the models best right after launch, then slowly dialed back to more economical settings after the hype wears off?
reply
tedsanders 13 hours ago | root | parent | next [–]
We don't vary our model quality with time of day or load (beyond negligible non-determinism). It's the same weights all day long with no quantization or other gimmicks. They can get slower under heavy load, though.
(I'm from OpenAI.)
reply
wasmainiac 11 hours ago | root | parent | next [–]
Thanks for the response, I appreciate it. I do notice variation in quality throughout the day. I use it primarily for searching documentation since it’s faster than Google in most cases; often it is on point, but it also seems off at times, inaccurate or shallow maybe. In some cases I just end the session.
reply
nl 10 hours ago | root | parent | next [–]
Usually I find this kind of variation is due to context management.
Accuracy can decrease at large context sizes. OpenAI's compaction handles this better than anyone else, but it's still an issue.
If you are seeing this kind of thing start a new chat and re-run the same query. You'll usually see an improvement.
reply
wasmainiac 3 hours ago | root | parent | next [–]
I don't think so. I am aware that large contexts impact performance. In long chats an old topic will sometimes be brought up in new responses, and the direction of the model is not as focused.
Regardless I tend to use new chats often.
reply
repeekad 8 hours ago | root | parent | prev | next [–]
This is called context rot
reply
charcircuit 3 hours ago | root | parent | next [–]
I thought context rot was only for long distance queries.
reply
GorbachevyChase 8 hours ago | root | parent | prev | next [–]
Hi Ted. I think that language models are great, and they’ve enabled me to do passion projects I never would have attempted before. I just want to say thanks.
reply
zamadatix 12 hours ago | root | parent | prev | next [–]
I appreciate you taking the time to respond to these kinds of questions the last few days.
reply
robertclaus 3 hours ago | root | parent | prev | next [–]
Hi Ted! Small world to see you here!
reply
Trufa 13 hours ago | root | parent | prev | next [–]
Can you be more specific than this? Does it vary over time from the launch of a model to the next few months, beyond tinkering and optimization?
reply
tedsanders 12 hours ago | root | parent | next [–]
Yeah, happy to be more specific. No intention of making any technically true but misleading statements.
The following are true:
- In our API, we don't change model weights or model behavior over time (e.g., by time of day, or weeks/months after release)
- Tiny caveats include: there is a bit of non-determinism in batched non-associative math that can vary by batch / hardware, bugs or API downtime can obviously change behavior, heavy load can slow down speeds, and this of course doesn't apply to the 'unpinned' models that are clearly supposed to change over time (e.g., xxx-latest). But we don't do any quantization or routing gimmicks that would change model weights.
- In ChatGPT and Codex CLI, model behavior can change over time (e.g., we might change a tool, update a system prompt, tweak default thinking time, run an A/B test, or ship other updates); we try to be transparent with our changelogs (listed below) but to be honest not every small change gets logged here. But even here we're not doing any gimmicks to cut quality by time of day or intentionally dumb down models after launch. Model behavior can change though, as can the product / prompt / harness.
ChatGPT release notes: https://help.openai.com/en/articles/6825453-chatgpt-release-...
Codex changelog: https://developers.openai.com/codex/changelog/
Codex CLI commit history: https://github.com/openai/codex/commits/main/
reply
newswasboring 5 minutes ago | root | parent | next [–]
>there is a bit of non-determinism in batched non-associative math that can vary by batch / hardware
Maybe a dumb question but does this mean model quality may vary based on which hardware your request gets routed to?
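(For illustration, the variation in question is at the level of floating-point rounding; a minimal sketch, assuming NumPy, of how reduction/batching order alone shifts a float32 sum in its last bits:)

```python
# Floating-point addition is not associative, so the same logical sum can
# differ in its last bits depending on how the reduction is batched.
# Different batch sizes or hardware can therefore produce tiny numeric
# differences without the weights changing at all.
import numpy as np

x = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)

def batched_sum(arr, n_chunks):
    total = np.float32(0.0)
    for chunk in np.array_split(arr, n_chunks):
        total += chunk.sum()
    return total

a = batched_sum(x, 7)
b = batched_sum(x, 1013)
print(a, b, a == b)  # typically prints two slightly different values
```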
reply
Trufa 10 hours ago | root | parent | prev | next [–]
I ask unironically then: am I imagining that models are great when they launch and degrade over time?
I've had this perceived experience so many times, and while of course it's almost impossible to be objective about this, it just seems so in-your-face.
I don't rule out that it's novelty plus getting used to it, plus psychological factors. Do you have any takes on this?
reply
jason_oster 8 hours ago | root | parent | next [–]
You might be susceptible to the honeymoon effect. If you have ever felt a dopamine rush when learning a new programming language or framework, this might be a good indication.
Once the honeymoon wears off, the tool is the same, but you get less satisfaction from it.
Just a guess! Not trying to psychoanalyze anyone.
reply
wasmainiac 43 minutes ago | root | parent | next [–]
I don’t think so. I notice the same thing, but I just use it like google most of the time, a service that used to be good. I’m not getting a dopamine rush off this, it’s just part of my day.
reply
jychang 12 hours ago | root | parent | prev | next [–]
What about the juice variable?
https://www.reddit.com/r/OpenAI/comments/1qv77lq/chatgpt_low...
reply
tedsanders 12 hours ago | root | parent | next [–]
Yep, we recently sped up default thinking times in ChatGPT, as now documented in the release notes: https://help.openai.com/en/articles/6825453-chatgpt-release-...
The intention was purely making the product experience better, based on common feedback from people (including myself) that wait times were too long. Cost was not a goal here.
If you still want the higher reliability of longer thinking times, that option is not gone. You can manually select Extended (or Heavy, if you're a Pro user). It's the same as at launch (though we did inadvertently drop it last month and restored it yesterday after Tibor and others pointed it out).
reply
tgrowazay 12 hours ago | root | parent | prev | next [–]
Isn’t that just how many steps at most a reasoning model should do?
reply
ComplexSystems 12 hours ago | root | parent | prev | next [–]
Do you ever replace ChatGPT models with cheaper, distilled, quantized, etc ones to save cost?
reply
tedsanders 3 hours ago | root | parent | next [–]
We do care about cost, of course. If money didn't matter, everyone would get infinite rate limits, 10M context windows, and free subscriptions. So if we make new models more efficient without nerfing them, that's great. And that's generally what's happened over the past few years. If you look at GPT-4 (from 2023), it was far less efficient than today's models, which meant it had slower latency, lower rate limits, and tiny context windows (I think it might have been like 4K originally, which sounds insanely low now). Today, GPT-5 Thinking is way more efficient than GPT-4 was, but it's also way more useful and way more reliable. So we're big fans of efficiency as long as it doesn't nerf the utility of the models. The more efficient the models are, the more we can crank up speeds and rate limits and context windows.
That said, there are definitely cases where we intentionally trade off intelligence for greater efficiency. For example, we never made GPT-4.5 the default model in ChatGPT, even though it was an awesome model at writing and other tasks, because it was quite costly to serve and the juice wasn't worth the squeeze for the average person (no one wants to get rate limited after 10 messages). A second example: in our API, we intentionally serve dumber mini and nano models for developers who prioritize speed and cost. A third example: we recently reduced the default thinking times in ChatGPT to speed up the times that people were having to wait for answers, which in a sense is a bit of a nerf, though this decision was purely about listening to feedback to make ChatGPT better and had nothing to do with cost (and for the people who want longer thinking times, they can still manually select Extended/Heavy).
I'm not going to comment on the specific techniques used to make GPT-5 so much more efficient than GPT-4, but I will say that we don't do any gimmicks like nerfing by time of day or nerfing after launch. And when we do make newer models more efficient than older models, it mostly gets returned to people in the form of better speeds, rate limits, context windows, and new features.
reply
jghn 12 hours ago | root | parent | prev | next [–]
He literally said no to this in his GP post
reply
joshvm 13 hours ago | root | parent | prev | next [–]
My gut feeling is that performance is more heavily affected by harnesses which get updated frequently. This would explain why people feel that Claude is sometimes more stupid - that's actually accurate phrasing, because Sonnet is probably unchanged. Unless Anthropic also makes small A/B adjustments to weights and technically claims they don't do dynamic degradation/quantization based on load. Either way, both affect the quality of your responses.
It's worth checking different versions of Claude Code, and updating your tools if you don't do it automatically. Also run the same prompts through VS Code, Cursor, Claude Code in terminal, etc. You can get very different model responses based on the system prompt, what context is passed via the harness, how the rules are loaded and all sorts of minor tweaks.
If you make raw API calls and see behavioural changes over time, that would be another concern.
reply
Someone1234 13 hours ago | root | parent | prev | next [–]
Specifically including routing (i.e. which model you route to based on load/ToD)?
PS - I appreciate you coming here and commenting!
reply
hhh 13 hours ago | root | parent | next [–]
There is no routing with the API, or when you choose a specific model in ChatGPT.
reply
zwaps 3 hours ago | root | parent | next [–]
In the past it seemed there was routing based on context-length. So the model was always the same, but optimized for different lengths. Is this still the case?
reply
derwiki 9 hours ago | root | parent | prev | next [–]
Has this always been the case?
reply
fragmede 10 hours ago | root | parent | prev | next [–]
I believe you when you say you're not changing the model file loaded onto the H100s or whatever, but there's something going on, beyond just being slower, when the GPUs are heavily loaded.
reply
clbrmbr 9 hours ago | root | parent | next [–]
I do wonder about reasoning effort.
reply
hauntsaninja 1 hour ago | root | parent | next [–]
Reasoning effort is denominated in tokens, not time, so no difference beyond slowness at heavy load
(I work at OpenAI)
reply
Corence 14 hours ago | root | parent | prev | next [–]
It is a fair question. I'd expect the numbers are all real. Competitors are going to rerun the benchmark with these models to see how the model is responding and succeeding on the tasks and use that information to figure out how to improve their own models. If the benchmark numbers aren't real their competitors will call out that it's not reproducible.
However it's possible that consumers without a sufficiently tiered plan aren't getting optimal performance, or that the benchmark is overfit and the results won't generalize well to the real tasks you're trying to do.
reply
mrandish 11 hours ago | root | parent | next [–]
> I'd expect the numbers are all real.
I think a lot of people are concerned due to 1) significant variance in performance being reported by a large number of users, and 2) We have specific examples of OpenAI and other labs benchmaxxing in the recent past (https://grok.com/share/c2hhcmQtMw_66c34055-740f-43a3-a63c-4b...).
It's tricky because there are so many subtle ways in which "the numbers are all real" could be technically true in some sense, yet still not reflect what a customer will experience (eg harnesses, etc). And any of those ways can benefit the cost structures of companies currently subsidizing models well below their actual costs with limited investor capital. All with billions of dollars in potential personal wealth at stake for company employees and dozens of hidden cost/performance levers at their disposal.
And it doesn't even require overt deception on anyone's part. For example, the teams doing benchmark testing of unreleased new models aren't the same people as the ops teams managing global deployment/load balancing at scale day-to-day. If there aren't significant ongoing resources devoted to specifically validating those two things remain in sync - they'll almost certainly drift apart. And it won't be anyone's job to even know it's happening until a meaningful number of important customers complain or sales start to fall. Of course, if an unplanned deviation causes costs to rise over budget, it's a high-priority bug to be addressed. But if the deviation goes the other way and costs are a little lower than expected, no one's getting a late-night incident alert. This isn't even a dig at OpenAI in particular, it's just the default state of how large orgs work.
reply
ifwinterco 14 hours ago | root | parent | prev | next [–]
On benchmarks GPT 5.2 was roughly equivalent to Opus 4.5 but most people who've used both for SWE stuff would say that Opus 4.5 is/was noticeably better
reply
CraigJPerry 13 hours ago | root | parent | next [–]
There's an extended thinking mode for GPT 5.2, I forget the name of it right at this minute. It's super slow - a 3 minute Opus 4.5 prompt takes circa 12 minutes to complete in 5.2 on that super extended thinking mode - but it is not a close race in terms of results: GPT 5.2 wins by a handy margin in that mode. It's just too slow to be usable interactively though.
reply
ifwinterco 12 hours ago | root | parent | next [–]
Interesting, sounds like I definitely need to give the GPT models another proper go based on this discussion
reply
elAhmo 13 hours ago | root | parent | prev | next [–]
I mostly used Sonnet/Opus 4.x in the past months, but 5.2 Codex seemed to be on par or better for my use case in the past month. I tried a few models here and there but always went back to Claude, but with 5.2 Codex for the first time I felt it was very competitive, if not better.
Curious to see how things will be with 5.3 and 4.6
reply
georgeven 13 hours ago | root | parent | prev | next [–]
Interesting. Everyone in my circle said the opposite.
reply
MadnessASAP 10 hours ago | root | parent | next [–]
My experience is that Codex follows directions better but Claude writes better code.
ChatGPT-5.2-Codex follows directions to ensure a task [bead](https://github.com/steveyegge/beads) is opened before starting a task and to keep it updated, almost to a fault. Claude-Opus-4.5, with the exact same directions, forgets about it within a round or two. Similarly, I had a project that required very specific behaviour from a couple of functions; it was documented in a few places, including comments at the top and bottom of the function. Codex was very careful in ensuring the function worked as documented. Claude decided it was easier to do the exact opposite: it rewrote the function, the comments, and the documentation to say it now did the opposite of what was previously there.
If I believed a LLM could be spiteful, I would've believed it on that second one. I certainly felt some after I realised what it had done. The comment literally said:
// Invariant: regardless of the value of X, this function cannot return Y
And it turned it into:
// Returns Y if X is true
reply
planckscnst 9 hours ago | root | parent | next [–]
That's so strange. I found GPT to be abysmal at following instructions to the point of unusability for any direction-heavy role. I have a common workflow that involves an orchestrator that pretty much does nothing but follow some simple directions [1]. GPT flat-out cannot do this most basic task.
[1]: https://github.com/Vibecodelicious/llm-conductor/blob/main/O...
reply
MadnessASAP 7 hours ago | root | parent | next [–]
Strange behaviour and LLMs are the iconic duo of the decade. They've definitely multiplied my productivity, since now instead of putting off writing boring code or getting stuck on details until I get frustrated and give up, I just give it to an agent to figure out.
I don't think my ability to read, understand, and write code is going anywhere though.
Neat tool BTW, I'm in the market for something like that.
reply
planckscnst 7 minutes ago | root | parent | next [–]
I've found this orchestrator+reviewer+judge setup to yield much better results than anything else I've tried. And it's such a simple setup - a few markdown files.
I'm also creating one that is similar, but purpose-built for making the plans that this setup can orchestrate. It still needs some tweaking to get agents to follow it better - it still takes additional prompting to nudge it down the proper path. But I've had similar benefits - sending plans through this adversarial review loop has yielded significant improvements in final output.
https://github.com/Vibecodelicious/llm-conductor/blob/main/p...
krzyk 13 hours ago | root | parent | prev | next [–]
It probably depends on programming language and expectations.
reply
ifwinterco 12 hours ago | root | parent | next [–]
This is mostly Python/TS for me... what Jonathan Blow would probably call not "real programming" but it pays the bills
They can both write fairly good idiomatic code but in my experience opus 4.5 is better at understanding overall project structure etc. without prompting. It just does things correctly first time more often than codex. I still don't trust it obviously but out of all LLMs it's the closest to actually starting to earn my trust
reply
deaux 7 hours ago | root | parent | prev | next [–]
Even for the same language it depends on domain.
reply
SatvikBeri 11 hours ago | root | parent | prev | next [–]
I pretty consistently heard people say Codex was much slower but produced better results, making it better for long-running work in the background, and worse for more interactive development.
reply
int_19h 54 minutes ago | root | parent | next [–]
Codex is also much less transparent about its reasoning. With Claude, you see a fairly detailed chain-of-thought, so you can intervene early if you notice the model veering in the wrong direction or going in circles.
reply
smcleod 12 hours ago | root | parent | prev | next [–]
I don't think much from OpenAI can be trusted tbh.
reply
aaaalone 14 hours ago | root | parent | prev | next [–]
At the end of the day you test it on your own use cases anyway, but benchmarks are a great initial hint as to whether it's worth testing out.
reply
cyanydeez 14 hours ago | root | parent | prev | next [–]
When do you think we should run this benchmark? Friday, 1pm? Monday 8AM? Wednesday 11AM?
I definitely suspect all these models are being degraded during heavy loads.
reply
j_maffe 14 hours ago | root | parent | next [–]
This hypothesis is tested regularly by plenty of live benchmarks. The services usually don't decay in performance.
reply
thinkingtoilet 12 hours ago | root | parent | prev | next [–]
We know OpenAI already got caught getting benchmark data and tuning their models to it. So the answer is a hard no. I imagine over time it gives a general view of the landscape and improvements, but take it with a large grain of salt.
reply
tedsanders 8 hours ago | root | parent | next [–]
Are you referring to FrontierMath?
We had access to the eval data (since we funded it), but we didn't train on the data or otherwise cheat. We didn't even look at the eval results until after the model had been trained and selected.
reply
rvz 11 hours ago | root | parent | prev | next [–]
The same thing happened with Meta researchers and Llama 4, and it shows what can go wrong when 'independent' researchers begin to game AI benchmarks. [0]
You always have to question these benchmarks, especially when the in-house researchers can potentially game them if they wanted to.
Which is why it must be independent.
[0] https://gizmodo.com/meta-cheated-on-ai-benchmarks-and-its-a-...
reply
purplerabbit 15 hours ago | parent | prev | next [–]
The lack of broad benchmark reports in this makes me curious: Has OpenAI reverted to benchmaxxing? Looking forward to hearing opinions once we all try both of these out
reply
MallocVoidstar 14 hours ago | root | parent | next [–]
The -codex models are only for 'agentic coding', nothing else.
reply
callamdelaney 1 hour ago | parent | prev | next [–]
Anthropic models are generally right the first time for me. ChatGPT and Gemini are often way, way off, with some fundamental misunderstanding of the task at hand.
reply
nharada 15 hours ago | parent | prev | next [–]
That's a massive jump, I'm curious if there's a materially different feeling in how it works or if we're starting to reach the point of benchmark saturation. If the benchmark is good then 10 points should be a big improvement in capability...
reply
jkelleyrtp 15 hours ago | parent | prev | next [–]
claude swe-bench is 80.8 and codex is 56.8
Seems like 4.6 is still all-around better?
reply
gizmodo59 15 hours ago | root | parent | next [–]
It's SWE-bench Pro, not SWE-bench Verified. The Verified benchmark has stagnated.
reply
joshuahedlund 15 hours ago | root | parent | next [–]
Any ideas why verified has stagnated? It was increasing rapidly and then basically stopped.
reply
Snuggly73 15 hours ago | root | parent | next [–]
It has been pretty much a benchmark for memorization for a while; there is a paper on the subject somewhere.
SWE-bench Pro public is newer, but it's not live, so it will get slowly memorized as well. The private dataset is more interesting, as are the results there:
https://scale.com/leaderboard/swe_bench_pro_private
reply
Rudybega 12 hours ago | root | parent | prev | next [–]
You're comparing two different benchmarks. Pro vs Verified.
reply
pjot 15 hours ago | prev | next [–]
Claude Code release notes:
> Version 2.1.32:
• Claude Opus 4.6 is now available!
• Added research preview agent teams feature for multi-agent collaboration (token-intensive feature, requires setting CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1)
• Claude now automatically records and recalls memories as it works
• Added "Summarize from here" to the message selector, allowing partial conversation summarization.
• Skills defined in .claude/skills/ within additional directories (--add-dir) are now loaded automatically.
• Fixed @ file completion showing incorrect relative paths when running from a subdirectory
• Updated --resume to re-use --agent value specified in previous conversation by default.
• Fixed: Bash tool no longer throws "Bad substitution" errors when heredocs contain JavaScript template literals like ${index + 1}, which previously interrupted tool execution
• Skill character budget now scales with context window (2% of context), so users with larger context windows can see more skill descriptions without truncation
• Fixed Thai/Lao spacing vowels (สระ า, ำ) not rendering correctly in the input field
• VSCode: Fixed slash commands incorrectly being executed when pressing Enter with preceding text in the input field
• VSCode: Added spinner when loading past conversations list
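(A hedged example of turning on the experimental agent teams flag mentioned in the release notes above; this assumes the CLI binary is invoked as `claude`, and is written in Python only to keep the examples consistent - a plain environment variable export in your shell works just as well:)

```python
# Launch Claude Code with the experimental agent-teams flag set for this run only.
import os
import subprocess

env = {**os.environ, "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"}
subprocess.run(["claude"], env=env, check=False)
```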
reply
neuronexmachina 15 hours ago | parent | next [–]
> Claude now automatically records and recalls memories as it works
Neat: https://code.claude.com/docs/en/memory
I guess it's kind of like Google Antigravity's "Knowledge" artifacts?
reply
bityard 14 hours ago | root | parent | next [–]
If it works anything like the memories on Copilot (which have been around for quite a while), you need to be pretty explicit about it being a permanent preference for it to be stored as a memory. For example, "Don't use emoji in your response" would only be relevant for the current chat session, whereas this is more sticky: "I never want to see emojis from you, you sub-par excuse for a roided-out spreadsheet"
reply
9dev 13 hours ago | root | parent | next [–]
> you sub-par excuse for a roided-out spreadsheet
That’s harsh, man.
reply
flutas 12 hours ago | root | parent | prev | next [–]
It's a lot more iffy than that IME.
It's very happy to throw a lot into the memory, even if it doesn't make sense.
reply
anupamchugh 4 hours ago | root | parent | next [–]
This is the core problem. The agent writes its own memory while working, so it has blind spots about what matters. I've had sessions where it carefully noted one thing but missed a bigger mistake in the same conversation — it can't see its own gaps.
A second pass over the transcript afterward catches what the agent missed. Doesn't need the agent to notice anything. Just reads the conversation cold.
The two approaches have completely different failure modes, which is why you need both. What nobody's built yet is the loop where the second pass feeds back into the memory for the next session.
reply
om8 15 hours ago | root | parent | prev | next [–]
Is there a way to disable it? Sometimes I value the agent not having knowledge that tempts it to cut corners.
reply
nerdsniper 14 hours ago | root | parent | next [–]
90-98% of the time I want the LLM to only have the knowledge I gave it in the prompt. I'm actually kind of scared that I'll wake up one day and the web interface for ChatGPT/Opus/Gemini will pull information from my prior chats.
reply
vineyardmike 12 hours ago | root | parent | next [–]
All of these providers support this feature. I don’t know about ChatGPT, but the rest are opt-in. I imagine with Gemini it’ll be on by default soon enough, since it’s consumer focused. Claude does constantly nag me to enable it though.
reply
pdntspa 12 hours ago | root | parent | prev | next [–]
They already do this
I've had claude reference prior conversations when I'm trying to get technical help on thing A, and it will ask me if this conversation is because of thing B that we talked about in the immediate past
reply
sanxiyn 9 hours ago | root | parent | next [–]
You can disable this at Settings > Capabilities > Memory > Search and reference chats.
reply
hypercube33 14 hours ago | root | parent | prev | next [–]
I'm fairly sure OpenAI/GPT does pull prior information in the form of its memories
reply
nerdsniper 14 hours ago | root | parent | next [–]
Ah, that could explain why I've found myself using it the least.
reply
sharifhsn 14 hours ago | root | parent | prev | next [–]
Gemini has this feature but it’s opt-in.
reply
kzahel 13 hours ago | root | parent | prev | next [–]
Claude told me it can be disabled by putting instructions in the MEMORY.md file not to use it. So it's only a soft disable AFAIK, and you'd need to do it on each machine.
reply
jsw97 1 hour ago | root | parent | next [–]
I ran into this yesterday and disabled it by changing permissions on the project’s memory directory. Claude was unable to advise me on how to disable it. You could probably write a global hook for this. Gross though.
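(A rough sketch of that permissions workaround, assuming the per-project memory path reported elsewhere in this thread, ~/.claude/projects/&lt;project-path&gt;/memory/; the placeholder path needs substituting:)

```python
# Strip write permission from the memory directory so Claude Code cannot
# record new memories there; restore write bits to undo.
import os
import stat
from pathlib import Path

# Substitute the real per-project directory name for <project-path>.
memory_dir = Path.home() / ".claude" / "projects" / "<project-path>" / "memory"

for p in [memory_dir, *memory_dir.rglob("*")]:
    if not p.exists():
        continue
    mode = (stat.S_IRUSR | stat.S_IXUSR) if p.is_dir() else stat.S_IRUSR
    os.chmod(p, mode)
```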
reply
4b11b4 12 hours ago | root | parent | prev | next [–]
I understand everyone's trying to solve this problem but I'm envisioning 1 year down the line when your memory is full of stuff that shouldn't be in there.
reply
codethief 15 hours ago | root | parent | prev | next [–]
Are we sure the docs page has been updated yet? Because that page doesn't say anything about automatic recording of memories.
reply
neuronexmachina 14 hours ago | root | parent | next [–]
Oh, quite right. I saw people mention MEMORY.md online and I assumed that was the doc for it, but it looks like it isn't.
reply
pdntspa 12 hours ago | root | parent | prev | next [–]
I thought it was already doing this?
I asked Claude UI to clear its memory a little while back and hoo boy CC got really stupid for a couple of days
reply
kzahel 13 hours ago | root | parent | prev | next [–]
I looked into it a bit. It stores memories near where it stores the JSONL session history. It's per-project (and specific to the machine). Claude pretty aggressively and frequently writes stuff in there. It uses MEMORY.md as sort of the index, and will write out other files for other topics (linking to them from the main MEMORY.md file).
It gives you a convenient way to say "remember this bug for me, we should fix it tomorrow". I'll be playing around with it more for sure.
I asked Claude to give me a TLDR (condensed from its system prompt):
----
Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations
MEMORY.md is always injected into the system prompt; truncated after 200 lines, so keep it concise
Separate topic files for detailed notes, linked from MEMORY.md
What to record: problem constraints, strategies that worked/failed, lessons learned
Proactive: when I hit a common mistake, check memory first - if nothing there, write it down
Maintenance: update or remove memories that are wrong or outdated
Organization: by topic, not chronologically
Tools: use Write/Edit to update (so you always see the tool calls)
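(For illustration only, a hypothetical MEMORY.md index in that layout; every file name and entry here is invented:)

```markdown
# Project memory (hypothetical example)

## Build & test
- Integration tests need the fixtures container running first -> details in [testing.md](testing.md)

## Lessons learned
- The sync worker's retries must stay idempotent; we broke this once -> see [sync-worker.md](sync-worker.md)

## Open items
- Flaky timeout in the export job, investigate next session
```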
reply
ra7 11 hours ago | root | parent | next [–]
> Persistent directory at ~/.claude/projects/{project-path}/memory/, persists across conversations
I create a git worktree, start Claude Code in that tree, and delete after. I notice each worktree gets a memory directory in this location. So is memory fragmented and not combined for the "main" repo?
reply
vardalab 8 hours ago | root | parent | next [–]
Yes, I noticed the same thing, and Claude told me that it's going to be deleted. I will have it improve the skill that is part of our worktree cleanup process to consolidate that memory into the main memory if there's anything useful.
reply
blueblisters 11 minutes ago | prev | next [–]
I know most people feel 5.2 is a better coding model but Opus has come in handy several times when 5.2 was stuck, especially for more "weird" tasks like debugging a VIO algorithm.
5.2 (and presumably 5.3) is really smart though and feels like it has higher "raw" intelligence.
Opus feels like a better model to talk to, and does a much better job at non-coding tasks especially in the Claude Desktop app.
Here's an example prompt where Opus in Claude put in a lot more effort and did a better job than GPT5.2 Thinking in ChatGPT:
`find all the pure software / saas stocks on the nyse/nasdaq with at least $10B of market cap. and give me a breakdown of their performance over the last 2 years, 1 year and 6 months. Also find their TTM and forward PE`
Opus usage limits are a bummer though and I am conditioned to reach for Codex/ChatGPT for most trivial stuff.
Works out in Anthropic's favor, as long as I'm subscribed to them.
reply
legitster 15 hours ago | prev | next [–]
I'm still not sure I understand Anthropic's general strategy right now.
They are doing these broad marketing programs trying to take on ChatGPT for "normies". And yet their bread and butter is still clearly coding.
Meanwhile, Claude's general use cases are... fine. For generic research topics, I find that ChatGPT and Gemini run circles around it: in the depth of research, the type of tasks it can handle, and the quality and presentation of the responses.
Anthropic is also doing all of these goofy things to try to establish the "humanity" of their chatbot - giving it rights and a constitution and all that. Yet it weirdly feels the most transactional out of all of them.
Don't get me wrong, I'm a paying Claude customer and love what it's good at. I just think there's a disconnect between what Claude is and what their marketing department thinks it is.
reply
bobbylarrybobby 4 hours ago | parent | next [–]
I really like that Claude feels transactional. It answers my question quickly and concisely and then shuts up. I don't need the LLM I use to act like my best friend.
reply
andkenneth 3 hours ago | root | parent | next [–]
Weirdly, I feel like partially because of this it feels more "human" and more like a real person I'm talking to. GPT models feel fake and forced, and will yap as if they're trying to become my friend, but it's off-putting in a way that doesn't work. Meanwhile Claude has always had better "emotional intelligence".
Claude also seems a lot better at picking up what's going on. If you're focused on tasks, then yeah, it's going to know you want quick answers rather than detailed essays. Could be part of it.
reply
apples_oranges 1 hour ago | root | parent | prev | next [–]
FYI, in settings you can configure ChatGPT to do the same
reply
matkoniecz 1 hour ago | root | parent | next [–]
where?
reply
maxbond 26 minutes ago | root | parent | next [–]
Settings > Personalization > Custom Instructions.
Here's what I use:
WE ARE PROFESSIONALS. DO NOT FLATTER ME. BE BLUNT AND FORTHRIGHT.
reply
cryptoegorophy 3 hours ago | root | parent | prev | next [–]
Then why are they advertising to people who are the complete opposite of you? Why couldn’t they just … ask an LLM what their target audience is?
reply
tsss 48 minutes ago | root | parent | prev | next [–]
Quickly and concisely? In my experience, Claude drivels on and on forever. The answers are always far longer than Gemini's, which is mostly fine for coding but annoying for planning/questions.
reply
tgtweak 15 hours ago | parent | prev | next [–]
Claude itself (outside of code workflows) actually works very well for general purpose chat. I have a few non-technical friends that have moved over from ChatGPT after some side-by-side testing and I've yet to see one go back - which is good, since Claude circa 8 months ago was borderline unusable on the API for anything but coding.
reply
int_19h 51 minutes ago | parent | prev | next [–]
I suspect it very much depends on the "generic research topics", but in my experience one thing that Claude is good at is in-depth research because it can keep going for such a long time; I've had research sessions go well over an hour, producing very detailed reports with lots of sources etc. Gemini Deep Research is nowhere even close.
reply
Squarex 13 hours ago | parent | prev | next [–]
Claude sucks at non-English languages. Gemini and ChatGPT are much better. Grok is the worst. I am a native Czech speaker, and Claude makes up words while Grok sometimes responds in Russian. So while I love it for coding, it’s unusable as a general-purpose assistant for me.
reply
9dev 12 hours ago | root | parent | next [–]
> Grok sometimes respond in Russian
Geopolitically speaking this is hilarious.
reply
Squarex 12 hours ago | root | parent | next [–]
The voice mode sounded like a Ukrainian trying to speak Czech. I don’t think it means anything.
reply
deaux 6 hours ago | root | parent | prev | next [–]
You mean Claude sucks at Czech. You're extrapolating here. I can name languages that Claude is better at than GPT.
Gemini is the most fluent in the highest number of human languages and has been for years (!) at this point - namely since Gemini 1.5 Pro, which was released Feb 2024. Two years ago.
reply
Squarex 4 hours ago | root | parent | next [–]
Yeah, sure, I was overly generalising it from one experience.
reply
jorl17 11 hours ago | root | parent | prev | next [–]
Claude is quite good at European Portuguese in my limited tests. Gemini 3 is also very good. ChatGPT is just OK and keeps code-switching all the time, it's very bizarre.
I used to think of Gemini as the lead in terms of Portuguese, but recently subjectively started enjoying Claude more (even before Opus 4.5).
In spite of this, ChatGPT is what I use for everyday conversational chat because it has loads of memories there, because of the top of the line voice AI, and, mostly, because I just brainstorm or do 1-off searches with it. I think effectively ChatGPT is my new Google and first scratchpad for ideas.
reply
kuboble 12 hours ago | root | parent | prev | next [–]
Claude Code (Opus) is very good in Polish.
I sometimes vibe code in Polish and it's as good as in English for me. It speaks natural, native-level Polish.
I used Opus to translate thousands of strings in my app into Polish, Korean, and two Chinese dialects. The Polish one is great, and the others are also good according to my customers.
reply
Squarex 4 hours ago | root | parent | next [–]
> I sometimes vibe code in polish
This is interesting to me. I always switch to English automatically when using Claude Code as I have learned software engineering on an English speaking Internet. Plus the muscle memory of having to query google in English.
reply
altern8 9 hours ago | root | parent | prev | next [–]
Your game is amazing!
I wish there was a "Reset" button to go back to the original position.
Where are you in Poland?
reply
kuboble 5 hours ago | root | parent | next [–]
Thanks :) Click "Level" -> "Try again"
Originally from Wrocław, but don't live in Poland anymore
reply
altern8 4 minutes ago | root | parent | next [–]
Ah, I'm originally from Italy and living in Wroclaw now, LOL.
BUT, I meant a button to restart after a few moves. Anyways, cool!
koakuma-chan 7 hours ago | root | parent | prev | next [–]
You could say its Polish is polished.
reply
khendron 8 hours ago | root | parent | prev | next [–]
Claude is helping me learn French right now. I am using it as a supplementary tutor for a class I am taking. I have caught it in a couple of mistakes, but generally it seems to be working pretty well.
reply
eaf7e281 15 hours ago | parent | prev | next [–]
I kinda agree. Their model just doesn't feel "daily" enough. I would use it for any "agentic" tasks and for using tools, but definitely not for day to day questions.
reply
lukebechtel 15 hours ago | root | parent | next [–]
Why? I use it for all and love it.
That doesn't mean you have to, but I'm curious why you think it's behind in the personal assistant game.
reply
legitster 15 hours ago | root | parent | next [–]
I have three specific use cases where I try both but ChatGPT wins:
- Recipes and cooking: ChatGPT just has way more detailed and practical advice. It also thinks outside of the box much more, whereas Claude gets stuck in a rut and sticks very closely to your prompt. And ChatGPT's easier to understand/skim writing style really comes in useful.
- Travel and itinerary: Again, ChatGPT can anticipate details much more, and give more unique suggestions. I am much more likely to find hidden gems or get good time-savers than Claude, which often feels like it is just rereading Yelp for you.
- Historical research: ChatGPT wins on this by a mile. You can tell ChatGPT has been trained on actual historical texts and physical books. You can track long historical trends, pull examples and quotes, and even give you specific book or page(!) references of where to check the sources. Meanwhile, all Claude will give you is a web search on the topic.
reply
aggie 14 hours ago | root | parent | next [–]
How does #3 square with Anthropic's literal warehouse full of books we've seen from the copyright case? Did OpenAI scan more books? Or did they take a shadier route of training on digital books despite copyright issues, but end up with a deeper library?
reply
legitster 11 hours ago | root | parent | next [–]
I have no idea, but I suspect there's a difference between using books to train an LLM and be able to reproduce text/writing styles, and being able to actually recall knowledge in said books.
reply
rolisz 13 hours ago | root | parent | prev | next [–]
I think they bought the books after they were caught having pirated them and lost on that point (because they pirated, not because training itself infringed copyright).
reply
eaf7e281 12 hours ago | root | parent | prev | next [–]
It's hard to say. Maybe it has to do with the way Claude responds or the lack of "thinking" compared to other models. I personally love Claude and it's my only subscription right now, but it just feels weird compared to the others as a personal assistant.
reply
lukebechtel 9 hours ago | root | parent | next [–]
Oh, I always use opus 4.5 thinking mode. Maybe that's the diff.
reply
FergusArgyll 9 hours ago | root | parent | prev | next [–]
My 2 cents:
All the labs seem to do very different post-training. OpenAI focuses on search: if it's set to thinking, it will search 30 websites before giving you an answer. Claude regularly doesn't search at all, even for questions where it obviously should. Its post-training seems more focused on "reasoning" or planning - things that are useful in programming, where the bottleneck is writing code without thinking about how you'll integrate it later, and where search is mostly useless. But for non-coding, day-to-day stuff - "what's the news with x", "how to improve my bread", "cheap tasty pizza", or even medical questions - you really just want a distillation of the internet plus some thought.
reply
solarkraft 15 hours ago | root | parent | prev | next [–]
But that’s what makes it so powerful (yeah, mixing model and frontend discussion here yet again). I have yet to see a non-DIY product that can so effortlessly call tens of tools by different providers to satisfy your request.
reply
quietsegfault 11 hours ago | root | parent | prev | next [–]
Claude is far superior for daily chat. I have to work hard to get it to not learn how to work around various bad behaviors I have but don’t want to change.
reply
dimgl 8 hours ago | parent | prev | next [–]
I don't get what's so difficult to understand. They have ambitions beyond just coding. And Claude is generally a good LLM. Even beyond just the coding applications.
reply
redox99 9 hours ago | parent | prev | next [–]
Why would I even use Claude for asking something on their web app, considering that it chips away at my Claude Code usage limit?
Their limit system is so bad.
reply
derwiki 9 hours ago | parent | prev | next [–]
It feels very similar to how Lyft positioned themselves against Uber. (And we know how that played out)
reply
fnordpiglet 4 hours ago | parent | prev | next [–]
Enterprise, government, and regulated institutions. It’s also the de facto standard for programming assistants at most places. They have a better story around compliance, alignment, task-based inference, agentic workflows, etc. Their retail story is meh, but I think their view is to be the AWS of LLMs, while OpenAI takes retail and Gemini becomes whatever Google does with products.
reply
dev1ycan 2 hours ago | parent | prev | next [–]
Their "constitution" is just garbage meant to defend them ripping off copyrighted material with the excuse that "it's not plagiarizing, it thinks!!!!1", which is false.
reply
handoflixue 2 hours ago | root | parent | next [–]
I don't recall them ever offering that legal reasoning - I'm sure you can provide a citation?
reply
simonw 16 hours ago | prev | next [–]
The bicycle frame is a bit wonky but the pelican itself is great: https://gist.github.com/simonw/a6806ce41b4c721e240a4548ecdbe...
reply
stkai 15 hours ago | parent | next [–]
Would love to find out they're overfitting for pelican drawings.
reply
fdeage 10 hours ago | root | parent | next [–]
OpenAI claims not to: https://x.com/aidan_mclau/status/1986255202132042164
reply
andy_ppp 14 hours ago | root | parent | prev | next [–]
Yes. Raccoon on a unicycle? Magpie on a pedalo?
reply
throw310822 12 hours ago | root | parent | next [–]
Correct horse battery staple:
https://claude.ai/public/artifacts/14a23d7f-8a10-4cde-89fe-0...
reply
ta988 12 hours ago | root | parent | next [–]
no staple?
reply
iwontberude 11 hours ago | root | parent | next [–]
it looks like a bodge wire
reply
_kb 11 hours ago | root | parent | prev | next [–]
Platypus on a penny farthing.
reply
theanonymousone 12 hours ago | root | parent | prev | next [–]
Even if not intentionally, it is probably leaking into training sets.
reply
fragmede 14 hours ago | root | parent | prev | next [–]
The estimation I did 4 months ago:
> there are approximately 200k common nouns in English, and then we square that, we get 40 billion combinations. At one second per, that's ~1200 years, but then if we parallelize it on a supercomputer that can do 100,000 per second that would only take 3 days. Given that ChatGPT was trained on all of the Internet and every book written, I'm not sure that still seems infeasible.
https://news.ycombinator.com/item?id=45455786
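(As a quick sanity check on the quoted arithmetic, assuming the same inputs:)

```python
# 200k common nouns, every ordered pair, at the throughputs quoted above.
nouns = 200_000
pairs = nouns ** 2                                    # 40,000,000,000 combinations
years_at_1_per_sec = pairs / (3600 * 24 * 365)        # ≈ 1,268 years, i.e. "~1200 years"
days_at_100k_per_sec = pairs / 100_000 / (3600 * 24)  # ≈ 4.6 days, the same ballpark as "a few days"
print(f"{pairs:,} pairs; {years_at_1_per_sec:,.0f} years serial; {days_at_100k_per_sec:.1f} days parallel")
```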
reply
eli 13 hours ago | root | parent | next [–]
How would you generate a picture of Noun + Noun in the first place in order to train the LLM with what it would look like? What's happening during that 1 estimated second?
reply
metalliqaz 12 hours ago | root | parent | next [–]
its pelicans all the way down
reply
Terretta 13 hours ago | root | parent | prev | next [–]
This is why everyone trains their LLM on another LLM. It's all about the pelicans.
reply
AnimalMuppet 12 hours ago | root | parent | prev | next [–]
But you need to also include the number of prepositions. "A pelican on a bicycle" is not at all the same as "a pelican inside a bicycle".
There are estimated to be 100 or so prepositions in English. That gets you to 4 trillion combinations.
reply
gcanyon 15 hours ago | parent | prev | next [–]
One aspect of this is that apparently most people can't draw a bicycle much better than this: they get the elements of the frame wrong, mess up the geometry, etc.
reply
arionmiles 13 hours ago | root | parent | next [–]
There's a research paper from the University of Liverpool, published in 2006, where researchers asked people to draw bicycles from memory, showing how much people overestimate their understanding of basic things. It was a very fun and short read.
It's called "The science of cycology: Failures to understand how everyday objects work" by Rebecca Lawson.
https://link.springer.com/content/pdf/10.3758/bf03195929.pdf
reply
devilcius 11 hours ago | root | parent | next [–]
There’s also a great art/design project about exactly this. Gianluca Gimini asked hundreds of people to draw a bicycle from memory, and most of them got the frame, proportions, or mechanics wrong. https://www.gianlucagimini.it/portfolio-item/velocipedia/
reply
rcxdude 13 hours ago | root | parent | prev | next [–]
A place I worked at used it as part of an interview question (it wasn't some pass/fail thing to get it 100% correct, and was partly a jumping-off point to a different question). This was in a city where nearly everyone uses bicycles as everyday transportation. It was surprising how many supposedly mechanically-minded people who rode a bike every day, even rode a bike to the interview, would draw a bike that would not work.
reply
gcanyon 12 hours ago | root | parent | next [–]
I wish I had interviewed there. When I first read that people have a hard time with this I immediately sat down without looking at a reference and drew a bicycle. I could ace your interview.
reply
throwuxiytayq 12 hours ago | root | parent | prev | next [–]
This is why at my company in interviews we ask people to draw a CPU diagram. You'd be surprised how many supposedly-senior computer programmers would draw a processor that would not work.
reply
niobe 12 hours ago | root | parent | next [–]
If I was asked that question in an interview for a programmer role, I'd walk out. How many abstraction layers on either side of your knowledge domain do you need to be an expert in? Further, being a good technologist of any kind is not about having arcane details at the tip of your frontal lobe, and a company worth working for would know that.
reply
duped 9 hours ago | root | parent | next [–]
I mean gp is clearly a joke but
A fundamental part of the job is being able to break down problems from large to small, reason about them, and talk about how you do it, usually with minimal context or without deep knowledge in all aspects of what we do. We're abstraction artists.
That question wouldn't be fundamentally different than any other architecture question. Start by drawing big, hone in on smaller parts, think about edge cases, use existing knowledge. Like bread and butter stuff.
I much more question your reaction to the joke than using it as a hypothetical interview question. I actually think it's good. And if it filters out people that have that kind of reaction then it's excellent. No one wants to work with the incurious.
reply
niobe 6 hours ago | root | parent | next [–]
If it was framed as "show us how you would break down this problem and think about it" then sure. If it's the gotcha quiz (much more common in my experience) then no.
But if that's what they were going for it should be something on a completely different and more abstract topic like "develop a method for emptying your swimming pool without electricity in under four hours"
reply
kortilla 5 hours ago | root | parent | prev | next [–]
It has nothing to do with “incurious”. Being asked to draw the architecture for something that is abstracted away from your actual job is a dickhead move because it’s just a test for “do you have the same interests as me?”
It’s no different than asking for the architecture of the power supply or the architecture of the network switch that serves the building. Brilliant software engineers are going to have gaps on non-software things.
reply
selcuka 9 hours ago | root | parent | prev | next [–]
Poe's Law [1]:
> Without a clear indicator of the author's intent, any parodic or sarcastic expression of extreme views can be mistaken by some readers for a sincere expression of those views.
[1] https://en.wikipedia.org/wiki/Poe%27s_law
reply
gedy 12 hours ago | root | parent | prev | next [–]
That's reasonable in many cases, but I've had situations like this for senior UI and frontend positions where they don't ask UI or frontend questions at all, just their pet low-level questions. Some even snort that it's softball to ask UI questions, or that "they use whatever". It's like, yeah, no wonder your UI is shit and now you are hiring to clean it up.
reply
rsc 12 hours ago | root | parent | prev | next [–]
Raises hand.
reply
gnatolf 14 hours ago | root | parent | prev | next [–]
Absolutely. A technically correct bike is very hard to draw in SVG without going overboard on detail.
reply
falloutx 14 hours ago | root | parent | next [–]
It's not. There are thousands of examples on the internet, though the good SVG sites do put them behind paywalls.
https://www.freepik.com/free-photos-vectors/bicycle-svg
reply
jefftk 12 hours ago | root | parent | next [–]
Several of those have incorrect frames:
https://www.freepik.com/free-vector/cyclist_23714264.htm
https://www.freepik.com/premium-vector/bicycle-icon-black-li...
Or missing/broken pedals:
https://www.freepik.com/premium-vector/bicycle-silhouette-ic...
https://www.freepik.com/premium-vector/bicycle-silhouette-ve...
http://freepik.com/premium-vector/bicycle-silhouette-vector-...
reply
gnatolf 12 hours ago | root | parent | prev | next [–]
From smaller to larger nitpick, there's basically something wrong with all of the first 15 or so of these drawings. Thanks for agreeing :)
reply
RussianCow 13 hours ago | root | parent | prev | next [–]
I'm not positive I could draw a technically correct bike with pen and paper (without a reference), let alone with SVG!
reply
nateglims 13 hours ago | root | parent | prev | next [–]
I just had an idea for an RLVR startup.
reply
cyanydeez 14 hours ago | root | parent | prev | next [–]
Yes, but obviously AGI will solve this by, _checks notes_ more TerraWatts!
reply
hackernudes 14 hours ago | root | parent | next [–]
The word is terawatts unless you mean earth-based watts. OK then, it's confirmed, data centers in space!
reply
seanhunter 14 hours ago | root | parent | prev | next [–]
…in space!
reply
franze 13 hours ago | parent | prev | next [–]
Here's the animated version: https://claude.ai/public/artifacts/3db12520-eaea-4769-82be-7...
reply
gryfft 13 hours ago | root | parent | next [–]
That's hilarious. It's so close!
reply
einrealist 15 hours ago | parent | prev | next [–]
They trained for it. That's the +0.1!
reply
eaf7e281 15 hours ago | parent | prev | next [–]
There's no way they actually work on training this.
reply
margalabargala 15 hours ago | root | parent | next [–]
I suspect they're training on this.
I asked Opus 4.6 for a pelican riding a recumbent bicycle and got this.
https://i.imgur.com/UvlEBs8.png
reply
WarmWash 15 hours ago | root | parent | next [–]
It would be way way better if they were benchmaxxing this. The pelican in the image (both images) has arms. Pelicans don't have arms, and a pelican riding a bike would use its wings.
reply
ryandrake 14 hours ago | root | parent | next [–]
Having briefly worked in the 3D Graphics industry, I don't even remotely trust benchmarks anymore. The minute someone's benchmark performance becomes a part of the public's purchasing decision, companies will pull out every trick in the book--clean or dirty--to benchmaxx their product. Sometimes at the expense of actual real-world performance.
reply
seanhunter 14 hours ago | root | parent | prev | next [–]
Pelicans don’t ride bikes. You can’t have scruples about whether or not the image of a pelican riding a bike has arms.
reply
jevinskie 14 hours ago | root | parent | next [–]
Wouldn’t any decent bike-riding pelican have a bike tailored to pelicans and their wings?
reply
actsasbuffoon 12 hours ago | root | parent | next [–]
Sure, that's one solution. You could also Island of Dr. Moreau your way to a pelican that can use a regular bike. The sky is the limit when you have no scruples.
reply
cinntaile 13 hours ago | root | parent | prev | next [–]
Now that would be a smart chat agent.
reply
TheDong 3 hours ago | root | parent | prev | next [–]
I don't think that really proves anything, it's unsurprising that recumbent bicycles are represented less in the training data and so it's less able to produce them.
Try something that's roughly equally popular, like a Turkey riding a Scooter, or a Yak driving a Tractor.
reply
mrandish 15 hours ago | root | parent | prev | next [–]
Interesting that it seems better. Maybe something about adding a highly specific yet unusual qualifier focusing attention?
reply
riffraff 14 hours ago | root | parent | prev | next [–]
perhaps try a penny farthing?
reply
KeplerBoy 15 hours ago | root | parent | prev | next [–]
There is no way they are not training on this.
reply
collinmanderson 15 hours ago | root | parent | next [–]
I suspect they focus on generic SVG drawing ability rather than this prompt specifically.
reply
fragmede 14 hours ago | root | parent | prev | next [–]
The people who work at Anthropic are aware of simonw and his test, and people aren't unthinking data-driven machines. However valid his test is or isn't, a better score on it is convincing. If it gets, say, 1,000 people to use Claude Code over Codex, how much would that be worth to Anthropic?
$200 * 1,000 = $200k/month.
I'm not saying they are, but claiming with such certainty that they aren't, when money is on the line, seems like a questionable conclusion, unless you have some insider knowledge you'd like to share with the rest of the class.
reply
zahlman 12 hours ago | parent | prev | next [–]
Do you find that word choices like "generate" (as opposed to "create", "author", "write" etc.) influence the model's success?
Also, is it bad that I almost immediately noticed that both of the pelican's legs are on the same side of the bicycle, but I had to look up an image on Wikipedia to confirm that they shouldn't have long necks?
Also, have you tried iterating prompts on this test to see if you can get more realistic results? (How much does it help to make them look up reference images first?)
reply
simonw 10 hours ago | root | parent | next [–]
I've stuck with "Generate an SVG of a pelican riding a bicycle" because it's the same prompt I've been using for over a year now and I want results that are sort-of comparable to each other.
I think when I first tried this I iterated a few times to get to something that reliably output SVG, but honestly I didn't keep the notes I should have.
reply
etwigg 11 hours ago | parent | prev | next [–]
If we do get paperclipped, I hope it is of the "cycling pelican" variety. Thanks for your important contribution to alignment Simon!
reply
athrowaway3z 15 hours ago | parent | prev | next [–]
This benchmark inspired me to have codex/claude build a DnD battlemap tool with svg's.
They got surprisingly far, but I did need to iterate a few times to have it build tools that would check for things like: don't put walls on roads or water.
What I think might be the next obstacle is self-knowledge. The new agents seem to have picked up ever more vocabulary about their context and compaction, etc.
As a next benchmark you could try having 1 agent and tell it to use a coding agent (via tmux) to build you a pelican.
reply
hoeoek 15 hours ago | parent | prev | next [–]
This really is my favorite benchmark
reply
bityard 14 hours ago | parent | prev | next [–]
Well, the clouds are upside-down, so I don't think I can give it a pass.
reply
beemboy 13 hours ago | parent | prev | next [–]
Isn't there a point at which it trains itself on these various outputs, or someone somewhere draws one and feeds it into the model so as to pass this benchmark?
reply
nine_k 14 hours ago | parent | prev | next [–]
I suppose the pelican must be now specifically trained for, since it's a well-known benchmark.
reply
7777777phil 15 hours ago | parent | prev | next [–]
best pelican so far would you say? Or where does it rank in the pelican benchmark?
reply
mrandish 15 hours ago | root | parent | next [–]
In other words, is it a pelican or a pelican't?
reply
canadiantim 12 hours ago | root | parent | next [–]
You’ve been sitting on that pun just waiting for it to take flight
reply
nubg 15 hours ago | parent | prev | next [–]
What about the Pelo2 benchmark? (the gray bird that is not gray)
reply
6thbit 13 hours ago | parent | prev | next [–]
do you have a gif? i need an evolving pelican gif
reply
Kye 9 hours ago | root | parent | next [–]
A pelican GIF in a Pelican(TM) MP4 container.
reply
risyachka 13 hours ago | parent | prev | next [–]
Pretty sure at this point they train it on pelicans
reply
MaysonL 7 hours ago | parent | prev | next [–]
Except for both its legs being on the same side of the bike.
reply
ares623 15 hours ago | parent | prev | next [–]
Can it draw a different bird on a bike?
reply
simonw 15 hours ago | root | parent | next [–]
Here's a kākāpō riding a bicycle instead: https://gist.github.com/simonw/19574e1c6c61fc2456ee413a24528...
I don't think it quite captures their majesty: https://en.wikipedia.org/wiki/K%C4%81k%C4%81p%C5%8D
reply
zahlman 12 hours ago | root | parent | next [–]
Now that I've looked it all up, I feel like that's much more accurate to a real kākāpō than the pelican is to a real pelican. It's almost as if it thinks a pelican is just a white flamingo with a different beak.
reply
DetroitThrow 15 hours ago | parent | prev | next [–]
The ears on top are a cute touch
reply
copilot_king_2 15 hours ago | parent | prev | next [–]
I'm firing all of my developers this afternoon.
reply
RGamma 14 hours ago | root | parent | next [–]
Opus 6 will fire you instead for being too slow with the ideas.
reply
insane_dreamer 13 hours ago | root | parent | prev | next [–]
Too late. You’ve already been fired by a moltbot agent from your PHB.
reply
behnamoh 15 hours ago | parent | prev | next [5 more]
fullstackchris 12 hours ago | parent | prev | next [2 more]
blibble 16 hours ago | prev | next [–]
> We build Claude with Claude. Our engineers write code with Claude Code every day
well that explains quite a bit
reply
jsheard 16 hours ago | parent | next [–]
CC has >6000 open issues, despite their bot auto-culling them after 60 days of inactivity. It was ~5800 when I looked just a few days ago so they seem to be accelerating towards some kind of bug singularity.
reply
dkersten 13 hours ago | root | parent | next [–]
Just anecdotally, each release seems to be buggier than the last.
To me, their claim that they are vibe coding Claude Code isn't the flex they think it is.
I find it harder and harder to trust Anthropic for business-related use and not just hobby tinkering. Between buggy releases, opaque and often seemingly glitchy rate limits and usage limits, and the model quality inconsistency, it's just not something I'd want to bet a business on.
reply
zahlman 12 hours ago | root | parent | next [–]
I think I would be much more frightened if it were working well.
reply
ifwinterco 11 hours ago | root | parent | next [–]
Exactly, thank goodness it's still a bit rubbish in some aspects
reply
csomar 5 hours ago | root | parent | prev | next [–]
Since version 2.1.9, performance has degraded significantly after extended use. After 30-40 prompts with substantial responses, memory usage climbs above 25GB, making the tool nearly unusable. I'm updating again to see if it improves.
Unlike what another commenter suggested, this is a complex tool. I'm curious whether the codebase might eventually reach a point where it becomes unfixable, even with human assistance. That would be an interesting development. We'll see.
reply
tgtweak 15 hours ago | root | parent | prev | next [–]
plot twist, it's all claude code instances submitting bug reports on behalf of end users.
reply
trescenzi 9 hours ago | root | parent | next [–]
I literally hit a claude code bug today, tried to use claude desktop to debug it which didn't help and it offered to open a bug report for me. So yes 100%. Some of the titles also make it pretty clear they are auto submitted. This is my favorite which was around the top when I was creating my bug report 3 hours ago and is now 3 pages back lol.
> Unable to process - no bug report provided. Please share the issue details you'd like me to convert into a GitHub issue title
https://github.com/anthropics/claude-code/issues/23459
reply
accrual 15 hours ago | root | parent | prev | next [–]
It's Claude, all the way down.
reply
ericrallen 5 hours ago | root | parent | prev | next [–]
The rate of issues opened on a popular repo is at least one order of magnitude beyond what whoever has to deal with them can handle.
reply
elAhmo 13 hours ago | root | parent | prev | next [–]
Insane to think that a relatively simple CLI tool has so many open issues...
reply
emilsedgh 13 hours ago | root | parent | next [–]
It's not really a simple CLI tool, though; it's heavily interactive.
reply
luckydata 54 minutes ago | root | parent | prev | next [–]
It's far from simple
reply
trymas 13 hours ago | root | parent | prev | next [–]
What’s so simple about it?
reply
elAhmo 13 hours ago | root | parent | next [–]
I said relatively simple. It is mostly an API interface to Anthropic models, with tool calling on top of it and very simple input and output.
reply
brookst 12 hours ago | root | parent | next [–]
With extensibility via plugins, MCP (stdio and http), UI to prompt the user for choices and redirection, tools to manage and view context, and on and on.
It is not at all a small app, at least as far as UX surface area. There are, what, 40ish slash commands? Each one is an opportunity for bugs and feature gaps.
reply
everforward 10 hours ago | root | parent | next [–]
I would still call that small, maybe medium. emacs is huge as far as CLI tools go; awk is large because it implements its own language (one apparently capable enough to write Doom in). `top` probably has a similar number of interaction points, and something like `lftp` might have more between local and remote state.
The complex and magic parts are around finding contextual things to include, and I'd be curious how many are that vs "forgot to call clear() in the TUI framework before redirecting to another page".
reply
dmazzoni 5 hours ago | root | parent | prev | next [–]
Also it's highly multithreaded / multiprocess - you can run subagents that can communicate with each other, you can interrupt it while it's in the middle of thinking and it handles it gracefully without forgetting what it was doing
reply
koakuma-chan 7 hours ago | root | parent | prev | next [–]
They wouldn't have 6000 issues if they hired one or two Rust engineers.
reply
trymas 2 hours ago | root | parent | prev | next [–]
If I got a dollar each time a developer (or CTO!) told me "this is (relatively) simple, it will take 2 days/weeks", only for it to take 2+ years to fully build and release a product that has more useful features than bugs...
I am not defending Anthropic[0], but how it is that I still see these "it's simple" takes on this forum every day from experienced people, I have no idea. There are who knows how many terminal emulators out there, with who knows how many different configurations. There are plugins for VSCode and various other editors (so it's not only a TUI).
Looking at the issue tracker, roughly 1/3 of the issues are seemingly feature requests[1].
Don't forget we are dealing with LLMs, and it's a tool whose purpose and selling point is that it codes on ANY computer, in ANY language, for ANY system. It's a very popular tool run every day by who knows how many people, so I could easily see how such a "relatively simple" tool would rack up thousands of issues, because "CC won't do weird thing X, for programming language Y, when I run it from my terminal Z". And because it's an LLM, there's a whole can of non-deterministic worms.
Have you created an LLM agent, especially one with moderately complex tool usage? If yes and it worked flawlessly, tell us your secrets (and get hired by Anthropic/OpenAI/etc). Probably 80% of my ever-growing code was just dealing with unknown unknowns: what if the LLM invokes a tool wrong? How do you guide the LLM back on track? How do you protect yourself and keep the LLM on track when prompts get out of hand or the user tries something weird? The problems were endless...
Yes, the core is "simple", but it's an extremely deep can of worms, and for such a successful tool I can easily see how there would be many issues.
Also, it's super funny that the first issue for me at the moment is that a user cannot paste images with Korean language input active (the issue description is also in Korean), and the second is about input problems in Windows PowerShell and CMD, which is obviously a totally different world from POSIX (?) terminal emulators.
[0] I have very adverse feelings for mega ultra wealthy VC moneys...
[1] https://github.com/anthropics/claude-code/issues?q=is%3Aissu...
reply
vouwfietsman 1 hour ago | root | parent | next [–]
Although I understand your frustration (and have certainly been on the other side of this as well!), I think it's very valuable to always verbalize your intuition about the scope of work and be critical if your intuition is in conflict with reality.
It's the best way to find out if there's a mismatch between value and effort, and it's the best way to learn and discuss the fundamental nature of complexity.
Similar to your argument, I can name countless situations where developers adamantly insisted that something was very hard to do, only for another developer to say "no, you can actually do that like this" and fix it in hours instead of weeks.
Yes, making a TUI from scratch is hard; no, that should not affect Claude Code, because they aren't actually making the TUI library (I hope). It should be the case that most of the complexity is in the model, and the client is just a text-based interface.
There also seems to be a mismatch between what you're describing as issues (for instance, the quality of the agent) and what people are describing as the actual issues (terminal commands don't work, or input is lost arbitrarily).
That's why verbalizing is important: you are thinking about different complexities than the people you're replying to.
reply
9dev 13 hours ago | root | parent | prev | next [–]
I’m pretty certain you haven’t used it (to its fullest extent) yet, then. Claude Code is easily one of the most complex terminal UIs I have seen.
reply
ozozozd 4 hours ago | root | parent | next [–]
It’s extremely simple.
If that’s the most complex TUI (yeah, new acronym) you’ve seen, you have a lot to catch up on!
I am talking about rendering images/video in the terminal!
reply
dvfjsdhgfv 12 hours ago | root | parent | prev | next [–]
Could you explain why? When I think about complex TUIs, I think about things we were building with Turbo Vision in the 90s.
reply
gorbypark 12 hours ago | root | parent | next [–]
I’m going to buck the trend and say it’s really not that complex. AFAIK they are using Ink, which is React with a TUI renderer.
Cue the "I could build it in a weekend" vibes: I built my own agent TUI using the OpenAI agent SDK and Ink. Of course it’s not as fleshed out as Claude, but it supports git worktrees for multi-agent work, slash commands, human-in-the-loop prompts, etc. If I point it at the Anthropic models it more or less produces results as good as the real Claude TUI.
I actually “decompiled” the Claude tools and prompts and recreated them. As of 6 months ago Claude was 15 tools, mostly pretty basic (list files, read file, write file, bash, etc.) with some very clever prompts, especially the task tool it uses to produce the quasi-planning-mode task bullets (even when not in planning mode).
Honestly, the idea of bringing this all together in an affordable monthly service, with obviously some seriously creative “prompt engineers”, is the magic/hard part (along with making the model itself, obviously).
reply
dwaltrip 12 hours ago | root | parent | prev | next [–]
sips coffee… ahh yes, let me find that classic Dropbox rsync comment
reply
paxys 15 hours ago | root | parent | prev | next [–]
Half of them were probably opened yesterday during the Claude outage.
reply
anematode 15 hours ago | root | parent | next [–]
Nah, it was at like 5500 before.
reply
raincole 15 hours ago | parent | prev | next [–]
It explains how important dogfooding is if you want to make an extremely successful product.
reply
jama211 15 hours ago | parent | prev | next [–]
It’s extremely successful, not sure what it explains other than your biases
reply
blibble 15 hours ago | root | parent | next [–]
Microsoft's products are also extremely successful
they're also total garbage
reply
simianwords 15 hours ago | root | parent | next [–]
but they have the advantage of already being a big company. Anthropic is new, so there's no reason for people to use its products unless they're actually good
reply
kuboble 11 hours ago | root | parent | next [–]
The tool is an absolutely fantastic coding assistant. That's why I use it.
But the number of non-critical bugs all over the place is at least an order of magnitude larger than in any software I've ever used daily.
Plenty of built-in /commands don't work. Sometimes it accepts keystrokes with one-second delays. It often scrolls hundreds of lines in the console after each keystroke. Every now and then it crashes completely and is unrecoverable (I once gave up and installed a fresh WSL). When you ask it a question in plan mode, it is somewhat of an art to find the answer, because after answering the question it will dump the whole current plan (several screens of text).
And just in general, the technical feel of the TUI is that of a vibe-coded project that got too big to control.
reply
derwiki 9 hours ago | root | parent | next [–]
I think this might be a harbinger of what we should expect for software quality in the next decade
reply
jama211 5 hours ago | root | parent | next [–]
Orrrrr it’s not
reply
Izikiel43 12 hours ago | root | parent | prev | next [–]
What if management gives them a reason? You can imagine what those reasons might be.
reply
jama211 5 hours ago | root | parent | prev | next [–]
Well there you have it, proof you’re not being reasonable. Microsoft’s products annoy HN users but they are absolutely not total garbage. They’re highly functional and valuable, and if they weren’t they truly wouldn’t be used; they’re just flawed.
reply
ed_mercer 4 hours ago | root | parent | next [–]
You should look at some Copilot reviews.
reply
holoduke 13 hours ago | root | parent | prev | next [–]
Claude is by far the most popular and best assistant currently available for a developer.
reply
wavemode 13 hours ago | root | parent | next [–]
Okay, and Windows is by far the most popular desktop operating system.
Discussions are pointless when the parties are talking past each other.
reply
dmazzoni 2 hours ago | root | parent | next [–]
Yeah, but there are dozens of AI coding assistants to choose from, and the cost to switch is very low, unlike switching operating systems.
I've tried them all and I keep coming back to Claude Code because it's just so much more capable and useful than the others.
reply
jama211 5 hours ago | root | parent | prev | next [–]
Yes, and windows is pretty good for most people. Don’t be ridiculous.
reply
pluralmonad 13 hours ago | root | parent | prev | next [–]
Popular meaning lots of people like it or that it is relatively widespread? Polio used to be popular in the latter way.
reply
quietsegfault 11 hours ago | root | parent | next [–]
I like windows, it’s fine. I like MacOS better. I like Linux. None of them are garbage or unusable.
reply
blibble 11 hours ago | root | parent | next [–]
have you used Windows 11?
file explorer takes 5 seconds to open
reply
jama211 5 hours ago | root | parent | next [–]
No it doesn’t, don’t be hyperbolic.
reply
elvin_d 5 hours ago | root | parent | prev | next [–]
It might only be among the most popular; https://skills.sh/ is one data point.
reply
oblio 2 hours ago | root | parent | prev | next [–]
Is it better than OpenCode?
reply
acedTrex 14 hours ago | root | parent | prev | next [–]
Something being successful and something being a high quality product with good engineering are two completely different questions.
reply
mvdtnz 15 hours ago | root | parent | prev | next [–]
Anthropic has perhaps the most embarrassing status page history I have ever seen. They are famous for downtime.
https://status.claude.com/
reply
ronsor 15 hours ago | root | parent | next [–]
As opposed to other companies which are smart enough not to report outages.
reply
tavavex 14 hours ago | root | parent | next [–]
So, there are only two types of companies: ones that have constant downtime, and ones that have constant downtime but hide it, right?
reply
Sebguer 14 hours ago | root | parent | next [–]
Basically, yes.
reply
Computer0 13 hours ago | root | parent | prev | next [–]
The competition doesn't currently have all 99's - https://status.openai.com/
reply
djeastm 12 hours ago | root | parent | prev | next [–]
The best way to use Claude's models seems to be some other inference provider (either OpenRouter or directly)
reply
derwiki 9 hours ago | root | parent | prev | next [–]
Shades of Fail Whale
reply
dimgl 15 hours ago | root | parent | prev | next [–]
And yet people still use them.
reply
cedws 14 hours ago | parent | prev | next [–]
The sandboxing in CC is an absolute joke, it's no wonder there's an explosion of sandbox wrappers at the moment. There's going to be a security catastrophe at some point, no doubt about it.
reply
gjsman-1000 16 hours ago | parent | prev | next [–]
Also explains why Claude Code is a React app outputting to a Terminal. (Seriously.)
reply
krystofbe 13 hours ago | root | parent | next [–]
I did some debugging on this today. The results are... sobering.
Memory comparison of AI coding CLIs (single session, idle):
| Tool | Footprint | Peak | Language |
|-------------|-----------|--------|---------------|
| Codex | 15 MB | 15 MB | Rust |
| OpenCode | 130 MB | 130 MB | Go |
| Claude Code | 360 MB | 746 MB | Node.js/React |
That's a 24x to 50x difference for tools that do the same thing: send text to an API.
vmmap shows Claude Code reserving 32.8 GB of virtual memory just for the V8 heap, 45% malloc fragmentation, and a peak footprint of 746 MB that never gets released: a classic leak pattern.
On my 16 GB Mac, a "normal" workload (2 Claude sessions + browser + terminal) pushes me into 9.5 GB swap within hours. My laptop genuinely runs slower with Claude Code than when I'm running local LLMs.
I get that shipping fast matters, but building a CLI with React and a full Node.js runtime is an architectural choice with consequences. Codex proves this can be done in 15 MB. Every Claude Code session costs me 360+ MB, and with MCP servers spawning per session, it multiplies fast.
reply
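For anyone who wants to sanity-check figures like these on their own machine, a Node process can report its own numbers via the standard `process.memoryUsage()` API. The snippet below is just a minimal sketch, and the values will obviously differ per machine and session.
```typescript
// Minimal sketch: print this Node process's own memory stats for comparison
// with the RSS / V8 heap figures discussed above. Values vary per machine/session.
const mb = (bytes: number): string => `${(bytes / 1024 / 1024).toFixed(1)} MB`;

const usage = process.memoryUsage();
console.log('rss       ', mb(usage.rss));        // resident set size (actual footprint)
console.log('heapTotal ', mb(usage.heapTotal));  // memory reserved for the V8 heap
console.log('heapUsed  ', mb(usage.heapUsed));   // V8 heap actually in use
console.log('external  ', mb(usage.external));   // buffers allocated outside the V8 heap
```
Watching `rss` drift upward while `heapUsed` stays flat over a long session is a quick way to spot the "footprint that never gets released" pattern described above.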
atonse 11 hours ago | root | parent | next [–]
Jarred Sumner (Bun's creator; Bun was recently acquired by Anthropic) has been working exclusively on tracking down memory leaks and improving performance in CC for the last couple of weeks. He's been tweeting his progress.
This is just regular tech debt, the kind that comes from building something to $1bn in revenue as fast as you possibly can and optimizing later.
They're optimizing now. I'm sure they'll have it under control in no time.
CC is an incredible product (so is codex but I use CC more). Yes, lately it's gotten bloated, but the value it provides makes it bearable until they fix it in short time.
reply
bdangubic 11 hours ago | root | parent | next [–]
if I had a dollar for each time I heard “until they fix it in short time” I’d have Elon money
reply
Weryj 13 hours ago | root | parent | prev | next [–]
I believe they use https://bun.com/, not Node.js.
reply
badlogic 8 hours ago | root | parent | prev | next [–]
OpenCode is not written in Go. It's TS on Bun, with OpenTUI underneath which is written in Zig.
reply
slopusila 11 hours ago | root | parent | prev | next [–]
why do you care about uncommitted virtual memory? that's practically infinite
reply
krona 15 hours ago | root | parent | prev | next [–]
Sounds like a web developer defined the solution a year before they knew what the problem was.
reply
jama211 5 hours ago | root | parent | next [–]
Nah. It’s just that web development languages are a better fit for agentic coding at present. They weighed the pros and cons; they’re not stupid.
reply
barnabee 1 hour ago | root | parent | next [–]
I’ve had good success with Claude building snappy TUIs in Rust with Ratatui.
It’s not obvious to me that there’d be any benefit of using TypeScript and React instead, especially none that makes up for the huge downsides compared to Rust in a terminal environment.
Seems to me the problem is more likely the skills of the engineers, not Claude’s capabilities.
reply
int_19h 42 minutes ago | root | parent | prev | next [–]
It's a popular myth, but not really true anymore with the latest and greatest. I'm currently using both Claude and Codex to work on a Haskell codebase, and it works wonderfully. More so than JS actually, since the type system provides extensive guardrails (you can get types with TS, but it's not sound, and it's very easy to write code that violates type constraints at runtime without even deliberately trying to do so).
reply
shimman 5 hours ago | root | parent | prev | next [–]
Of course they can be stupid, hubris is a real thing and humans fail all the time.
reply
jama211 15 hours ago | root | parent | prev | next [–]
There’s nothing wrong with that, except it lets ai skeptics feel superior
reply
RohMin 15 hours ago | root | parent | next [–]
https://www.youtube.com/watch?v=LvW1HTSLPEk
I thought this was a solid take
reply
jdthedisciple 14 hours ago | root | parent | next [–]
interesting
reply
everforward 10 hours ago | root | parent | prev | next [–]
There are absolutely things wrong with that, because React was designed to solve problems that don't exist in a TUI.
React fixes the problem of the DOM being too slow to fully re-render the entire webpage every time a piece of state changes. That doesn't apply in a TUI: you can re-render a TUI faster than the monitor can refresh. There's no need to selectively re-render parts of the UI; you can just re-render the entire thing every time something changes without even stressing the CPU.
It brings in a bunch of complexity that doesn't solve any real issues beyond the devs being more familiar with React than a TUI library.
reply
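A minimal sketch of the "just re-render the entire thing" approach described above, using raw ANSI escape codes; the state shape and `paint()` helper here are hypothetical, purely for illustration, not anything from Claude Code.
```typescript
// Minimal full-redraw sketch: repaint the whole terminal frame on every state change.
// The UiState shape and paint() helper are hypothetical, purely for illustration.
interface UiState {
  status: string;
  tokens: number;
}

function paint(state: UiState): void {
  const frame = [
    `status: ${state.status}`,
    `tokens: ${state.tokens}`,
  ].join('\n');
  // ESC[H moves the cursor home, ESC[2J clears the screen, then the new frame is written.
  process.stdout.write('\x1b[H\x1b[2J' + frame + '\n');
}

paint({ status: 'idle', tokens: 0 });
paint({ status: 'thinking', tokens: 152 });
```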
jama211 5 hours ago | root | parent | next [–]
It is demonstrably absolutely fine. Sheesh.
reply
overgard 12 hours ago | root | parent | prev | next [–]
I haven't looked at it directly, so I can't speak to its quality, but it's a pretty weird way to write a terminal app.
reply
jama211 5 hours ago | root | parent | next [–]
It’s unusual but it’s a better fit for agentic coding so it makes sense
reply
3836293648 14 hours ago | root | parent | prev | next [–]
Oh come on. It's massively wrong. It is always wrong. It's not always wrong enough to be important, but it doesn't stop being wrong
reply
jama211 5 hours ago | root | parent | next [–]
No, it’s not.
reply
vntok 13 hours ago | root | parent | prev | next [–]
You should elaborate. What are your criteria and why do you think they should matter to actual users?
reply
exe34 15 hours ago | root | parent | prev | next [–]
I use AI and I can call AI slop shit if it smells like shit.
reply
jama211 5 hours ago | root | parent | next [–]
And this doesn’t.
reply
thehamkercat 15 hours ago | root | parent | prev | next [–]
Same with opencode and gemini, it's disgusting
Codex (by openai ironically) seems to be the fastest/most-responsive, opens instantly and is written in rust but doesn't contain that many features
Claude opens in around 3-4 seconds
Opencode opens in 2 seconds
Gemini-cli is an abomination which opens in around 16 second for me right now, and in 8 seconds on a fresh install
Codex takes 50ms for reference...
--
If their models are so good, why are they not rewriting their own React-in-the-CLI BS in C++ or Rust for a 100x performance improvement (not kidding, it really is that much)?
reply
g947o 15 hours ago | root | parent | next [–]
Great question, and my guess:
If you build React in C++ and Rust, even if the framework is there, you'll likely need to write your components in C++/Rust. That is a difficult problem. There are actually libraries out there that allow you to build web UI with Rust, although they are for web (+ HTML/CSS) and not specifically CLI stuff.
So someone needs to create such a library that is properly maintained and such. And you'll likely develop slower in Rust compared to JS.
These companies don't see a point in doing that. So they just use whatever already exists.
reply
shoeb00m 15 hours ago | root | parent | next [–]
OpenCode wrote their own TUI library in Zig, and then built a SolidJS library on top of that.
https://github.com/anomalyco/opentui
reply
g947o 12 hours ago | root | parent | next [–]
This has nothing to do with React style UI building.
reply
Philpax 15 hours ago | root | parent | prev | next [–]
Those Rust libraries have existed for some time:
- https://github.com/ratatui/ratatui
- https://github.com/ccbrown/iocraft
- https://crates.io/crates/dioxus-tui
reply
g947o 12 hours ago | root | parent | next [–]
Where is React? These are TUI libraries, which are not the same thing
reply
Philpax 12 hours ago | root | parent | next [–]
iocraft and dioxus-tui implement the React model, or derivatives of it.
reply
pdntspa 12 hours ago | root | parent | prev | next [–]
and why do they need react...
reply
Philpax 12 hours ago | root | parent | next [–]
That's actually relatively understandable. The React model (not necessarily React itself) of compositional reactive one-way data binding has become dominant in UI development over the last decade because it's easy to work with and does not require you to keep track of the state of a retained UI.
Most modern UI systems are inspired by React or a variant of its model.
reply
jama211 5 hours ago | root | parent | next [–]
Well said.
reply
azinman2 15 hours ago | root | parent | prev | next [–]
Why does it matter if Claude Code opens in 3-4 seconds if everything you do with it can take many seconds to minutes? Seems irrelevant to me.
reply
RohMin 15 hours ago | root | parent | next [–]
I guess with ~50 years of CPU advancements, 3-4 seconds for a TUI to open makes it seem like we lost the plot somewhere along the way.
reply
strange_quark 14 hours ago | root | parent | next [–]
Don’t forget they’ve also publicly stated (bragged?) about the monumental accomplishment of getting some text in a terminal to render at 60fps.
reply
jama211 5 hours ago | root | parent | prev | next [–]
So it doesn’t matter at all except to your sensibilities. Sounds to me like they are simply much better at prioritisation than your average HN user, who’d have taken forever to release it, but at least the terminal interface would be snappy…
reply
barnabee 1 hour ago | root | parent | prev | next [–]
Some people[0] like their tools to be well engineered. This is not unique to software.
[0] Perhaps everyone who actually takes pride in their craft and doesn’t prioritise shitty hustle culture and making money over everything else.
reply
wahnfrieden 15 hours ago | root | parent | prev | next [–]
Because when the agent is taking many seconds to minutes, I am starting new agents instead of waiting or switching to non-agent tasks
reply
mbesto 14 hours ago | root | parent | prev | next [–]
This is exactly the type of thing that AI code writers don't do well - understand the prioritization of feature development.
Some developers say 3-4 seconds are important to them, others don't. Who decides what the truth is? A human? ClawdBot?
reply
sumedh 1 hour ago | root | parent | next [–]
> Some developers say 3-4 seconds are important to them, others don't.
Wasn't GTA 5 famous for its very long startup time, and it turned out there was some bug that some random developer/gamer found and fixed for them?
Most gamers didn't care; they still played it.
reply
jama211 5 hours ago | root | parent | prev | next [–]
The humans in the company (correctly) realised that a few seconds of startup time for basically the most powerful productivity agent ever made is a totally acceptable trade-off, priority-wise, if it lets them focus on fast iteration of features. Who would think differently???
reply
shoeb00m 15 hours ago | root | parent | prev | next [–]
Codex CLI is missing a bunch of UX features, like resizing when the terminal size changes.
OpenCode's core is actually written in Zig; only the UI orchestration is in SolidJS. It's only slightly slower to load than Neovim on my system.
https://github.com/anomalyco/opentui
reply
wahnfrieden 15 hours ago | root | parent | prev | next [–]
Codex team made the right call to rewrite its TypeScript to Rust early on
reply
bdangubic 11 hours ago | root | parent | prev | next [–]
50ms to open and then 2hrs to solve a simple problem vs 4s to open and then 5m to solve a problem, eh?
reply
jama211 5 hours ago | root | parent | next [–]
lol right? I feel like I’m taking crazy pills here. Why do people here want to prioritise the most pointless things? Oh right it’s because they’re bitter and their reaction is mostly emotional…
reply
tayo42 15 hours ago | root | parent | prev | next [–]
Is this a react feature or did they build something to translate react to text for display in the terminal?
reply
sbarre 15 hours ago | root | parent | next [–]
React, the framework, is separate from react-dom, the browser rendering library. Most people think of those two as one thing because they're the most popular combo.
But there are many different rendering libraries you can use with React, including Ink, which is designed for building CLI TUIs.
reply
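For anyone who hasn't seen Ink, a minimal program looks roughly like this. `render`, `Box`, and `Text` are Ink's actual exports; the component itself is just an illustrative sketch, not anything from Claude Code.
```tsx
// Minimal Ink sketch: React components rendered to the terminal instead of the DOM.
import React from 'react';
import { render, Box, Text } from 'ink';

// Hypothetical status component, purely for illustration.
const Status = ({ message }: { message: string }) => (
  <Box borderStyle="round" paddingX={1}>
    <Text color="green">{message}</Text>
  </Box>
);

render(<Status message="Hello from a React-driven TUI" />);
```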
skydhash 13 hours ago | root | parent | next [–]
Anyone who knows a bit about terminals would already know that using React is not a good solution for a TUI. Terminal rendering is done as a stream of characters that includes both the text and how it displays, and which can also alter previously rendered text. Diffing that is nonsense.
reply
9dev 12 hours ago | root | parent | next [–]
You’re not diffing that, though. The app keeps a virtual representation of the UI state in a tree structure that it diffs on, then serializes that into a formatted string to draw to the output stream. It’s not about limiting the number of characters redrawn (that would indeed be nonsense), but handling separate output regions effectively.
reply
pkkim 15 hours ago | root | parent | prev | next [–]
They used Ink: https://github.com/vadimdemedes/ink
I've used it myself. It has some rough edges in terms of rendering performance but it's nice overall.
reply
tayo42 15 hours ago | root | parent | next [–]
Thats pretty interesting looking, thanks!
reply
embedding-shape 15 hours ago | root | parent | prev | next [–]
Not a built-in React feature. The idea has been around for quite some time; I first came across it with https://github.com/vadimdemedes/ink back in 2022.
reply
tayo42 15 hours ago | root | parent | prev | next [–]
i had claude make a snake clone and fix all the flickering in like 20 minutes with the library mentioned lol
reply
CamperBob2 15 hours ago | root | parent | prev | next [–]
> Also explains why Claude Code is a React app outputting to a Terminal. (Seriously.)
Who cares, and why?
All of the major providers' CLI harnesses use Ink: https://github.com/vadimdemedes/ink
reply
sweetheart 15 hours ago | root | parent | prev | next [–]
React's core is agnostic when it comes to the actual rendering interface. It's just all the fancy algos for diffing and updating the underlying tree. Using it for rendering a TUI is a very reasonable application of the technology.
reply
skydhash 14 hours ago | root | parent | next [–]
The terminal UI is not a tree structure that you can diff. It’s a 2D grid of character cells, where every manipulation is a stream of text. Refreshing or diffing that makes no sense.
reply
HarHarVeryFunny 11 hours ago | root | parent | next [–]
IMO diffing might have made sense to do here, but that's not what they chose to do.
What's apparently happening is that React tells Ink to update (re-render) the UI "scene graph", and Ink then generates a new full-screen image of how the terminal should look, then passes this screen image to another library, log-update, to draw to the terminal. log-update draws these screen images by a flicker-inducing clear-then-redraw, which it has now fixed by using escape codes to have the terminal buffer and combine these clear-then-redraw commands, thereby hiding the clear.
An alternative solution, rather than using the flicker-inducing clear-then-redraw in the first place, would have been just to do terminal screen image diffs and draw the changes (which is something I did back in the day for fun, sending full-screen ASCII digital clock diffs over a slow 9600baud serial link to a real terminal).
reply
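A minimal sketch of the screen-diff alternative described above: keep the previous frame, compare line by line, and rewrite only the lines that changed. This is illustrative only, with hypothetical names, and is not how Ink or log-update actually work.
```typescript
// Minimal line-diff sketch: rewrite only the terminal lines that changed between frames.
// Purely illustrative; not how Ink or log-update are actually implemented.
let previousFrame: string[] = [];

function drawDiff(nextFrame: string[]): void {
  const rows = Math.max(previousFrame.length, nextFrame.length);
  for (let row = 0; row < rows; row++) {
    const oldLine = previousFrame[row] ?? '';
    const newLine = nextFrame[row] ?? '';
    if (oldLine === newLine) continue;           // unchanged line: skip it entirely
    // ESC[<row>;1H moves the cursor, ESC[2K clears that line, then the new text is written.
    process.stdout.write(`\x1b[${row + 1};1H\x1b[2K${newLine}`);
  }
  previousFrame = nextFrame;
}

drawDiff(['status: idle', 'tokens: 0']);
drawDiff(['status: thinking', 'tokens: 152']);   // only lines that differ get rewritten
```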
skydhash 10 hours ago | root | parent | next [–]
Any diff requires having a Before and an After. Whatever was done to produce the After could be done to directly render the changes. No need for the additional compute of a diff.
reply
HarHarVeryFunny 9 hours ago | root | parent | next [–]
Sure, you could just draw the full new screen image (albeit a bit inefficient if only one character changed), and no need for the flicker-inducing clear before draw either.
I'm not sure what the history of log-update has been or why it does the clear-before-draw. Another simple alternative to pre-clearing would have been just to clear to end of line (ESC[0K) after each partial line drawn.
reply
Longwelwind 13 hours ago | root | parent | prev | next [–]
When doing advanced terminal UI, you might at some point have to layout content inside the terminal. At some point, you might need to update the content of those boxes because the state of the underlying app has changed. At that point, refreshing and diffing can make sense. For some, the way React organizes logic to render and update an UI is nice and can be used in other contexts.
reply
skydhash 13 hours ago | root | parent | next [–]
How big does the UI state have to be for it to make sense to bring in React and its accidental complexity? I’m ready to bet that no TUI has that big of a state.
reply
bizzleDawg 13 hours ago | root | parent | prev | next [–]
Only in the same way that the pixels displayed in a browser are not a tree structure that you can diff - the diffing happens at a higher level of abstraction than what's rendered.
Diffing and only updating the parts of the TUI which have changed does make sense if you consider the alternative is to rewrite the entire screen every "frame". There are other ways to abstract this; e.g. a library like tqdm for Python may well use a significantly simpler abstraction than a tree for storing what it's going to update next for its progress bar widget, but it also provides a much simpler interface than Claude does.
To me it seems more fair game to attack it for being written in JS than for using a particular "rendering" technique to minimise updates sent to the terminal.
reply
skydhash 13 hours ago | root | parent | next [–]
Most UI libraries store state in a tree of components. And if you’re creating a custom widget, they will give you a 2D context for the drawing operations. Using React makes sense in those cases because what you’re diffing is state; the UI library then renders as usual, which is usually done via compositing.
The terminal does not have a render phase (or an update-state phase). You either refresh the whole screen (flickering) or control where to update manually (a custom engine, which may flicker locally). And any updates are sequential (moving the cursor and then sending what is to be displayed), not all at once as in 2D pixel rendering.
So most TUIs only update when there’s an event to do so, or at a frequency much lower than 60fps. This is why top and htop have a setting for that, and why other TUI software offers a keybind to refresh and reset the rendering engine.
reply
sweetheart 11 hours ago | root | parent | prev | next [–]
The "UI" is indeed represented in memory in tree-like structure for which positioning is calculated according to a flexbox-like layout algo. React then handles the diffing of this structure, and the terminal UI is updated according to only what has changed by manually overwriting sections of the buffer. The CLI library is called Ink and I forget the name of the flexbox layout algo implementation, but you can read about the internals if you look at the Ink repo.
reply
CooCooCaCha 15 hours ago | root | parent | prev | next [–]
It’s really not that crazy.
React itself is a frontend-agnostic library. People primarily use it for writing websites but web support is actually a layer on top of base react and can be swapped out for whatever.
So they’re really just using react as a way to organize their terminal UI into components. For the same reason it’s handy to organize web ui into components.
reply
dreamteam1 13 hours ago | root | parent | next [–]
And some companies use it to write start menus.
reply
quietsegfault 11 hours ago | parent | prev | next [–]
What does it explain, oh snark master supreme?
reply
spruce_tips 15 hours ago | parent | prev | next [–]
Ah yes, explains why it takes 3 seconds for a new chat to load after I click new chat in the macOS app.
reply
exe34 15 hours ago | parent | prev | next [–]
Can Claude fix the flicker in Claude yet?
reply
nickstinemates 14 hours ago | root | parent | next [7 more]
anupamchugh 6 hours ago | prev | next [–]
Agent teams in this release is mcp-agent-mail [1] built into the runtime. Mailbox, task list, file locking: zero config, just works. I forked agent-mail [2], added heartbeat/presence tracking, and had a PR upstream [3] when agent teams dropped. For coordinating Claude Code instances within a session, the built-in version wins on friction alone.
Where it stops: agent teams is session-scoped. I run Claude Code during the day, hand off to Codex overnight, pick up in the morning. Different runtimes, async, persistent. Agent teams dies when you close the terminal: no cross-tool messaging, no file leases, no audit trail that outlives the session.
What survives sherlocking is whatever crosses the runtime boundary. The built-in version will always win inside its own walls, with less friction and zero setup. The cross-tool layer is where community tooling still has room. Until that gets absorbed too.
[1] https://github.com/Dicklesworthstone/mcp_agent_mail
[2] https://github.com/anupamchugh/mcp_agent_mail
[3] https://github.com/Dicklesworthstone/mcp_agent_mail/pull/77
reply
Someone1234 16 hours ago | prev | next [–]
Does anyone with more insight into the AI/LLM industry happen to know if the cost to run them in normal user workflows is falling? The reason I'm asking is that "agent teams", while a cool concept, is largely constrained by the economics of running multiple LLM agents (i.e. the plans/API calls that make this practical at scale are expensive).
A year or more ago, I read that both Anthropic and OpenAI were losing money on every single request even for their paid subscribers, and I don't know if that has changed with more efficient hardware/software improvements/caching.
reply
simonw 15 hours ago | parent | next [–]
The cost per token served has been falling steadily over the past few years across basically all of the providers. OpenAI dropped the price they charged for o3 to 1/5th of what it was in June last year thanks to "engineers optimizing inferencing", and plenty of other providers have found cost savings too.
Turns out there was a lot of low-hanging fruit in terms of inference optimization that hadn't been plucked yet.
> A year or more ago, I read that both Anthropic and OpenAI were losing money on every single request even for their paid subscribers
Where did you hear that? It doesn't match my mental model of how this has played out.
reply
cootsnuck 15 hours ago | root | parent | next [–]
I have not seen any reporting or evidence at all that Anthropic or OpenAI is able to make money on inference yet.
> Turns out there was a lot of low-hanging fruit in terms of inference optimization that hadn't been plucked yet.
That does not mean the frontier labs are pricing their APIs to cover their costs yet.
It can both be true that it has gotten cheaper for them to provide inference and that they still are subsidizing inference costs.
In fact, I'd argue that's way more likely given that has been precisely the goto strategy for highly-competitive startups for awhile now. Price low to pump adoption and dominate the market, worry about raising prices for financial sustainability later, burn through investor money until then.
What no one outside of these frontier labs knows right now is how big the gap is between current pricing and eventual pricing.
reply
chis 15 hours ago | root | parent | next [–]
It's quite clear that these companies do make money on each marginal token. They've said this directly and analysts agree [1]. It's less clear that the margins are high enough to pay off the up-front cost of training each model.
[1] https://epochai.substack.com/p/can-ai-companies-become-profi...
reply
m101 13 hours ago | root | parent | next [–]
It’s not clear at all because model training upfront costs and how you depreciate them are big unknowns, even for deprecated models. See my last comment for a bit more detail.
reply
simonw 10 hours ago | root | parent | next [–]
They are obviously losing money once you account for training. I don't think they are selling inference for less than what it costs to serve those tokens.
That really matters. If they are making a margin on inference they could conceivably break even no matter how expensive training is, provided they sign up enough paying customers.
If they lose money on every paying customer, then building great products that customers want to pay for will just make their financial situation worse.
reply
ACCount37 12 hours ago | root | parent | prev | next [–]
By now, model lifetime inference compute is >10x model training compute, for mainstream models. Further amortized by things like base model reuse.
reply
magicalist 14 hours ago | root | parent | prev | next [–]
> They've said this directly and analysts agree [1]
chasing down a few sources in that article leads to articles like this at the root of claims[1], which is entirely based on information "according to a person with knowledge of the company’s financials", which doesn't exactly fill me with confidence.
[1] https://www.theinformation.com/articles/openai-getting-effic...
reply
mrgaro 1 hour ago | root | parent | next [–]
But there are companies which are only serving open-weight models via APIs (i.e. they are not doing any training), so they must be profitable? Here's one list of providers from OpenRouter serving Llama 3.3 70B: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...
reply
simonw 10 hours ago | root | parent | prev | next [–]
"according to a person with knowledge of the company’s financials" is how professional journalists tell you that someone who they judge to be credible has leaked information to them.
I wrote a guide to deciphering that kind of language a couple of years ago: https://simonwillison.net/2023/Nov/22/deciphering-clues/
reply
9cb14c1ec0 14 hours ago | root | parent | prev | next [–]
It's also true that their inference costs are being heavily subsidized. For example, if you factor Oracle's debt into OpenAI's revenue, they would be incredibly far underwater on inference.
reply
emp17344 10 hours ago | root | parent | prev | next [–]
Sure, but if they stop training new models, the current models will be useless in a few years as our knowledge base evolves. They need to continually train new models to have a useful product.
reply
NitpickLawyer 15 hours ago | root | parent | prev | next [–]
> they still are subsidizing inference costs.
They are for sure subsidising costs on the all-you-can-prompt packages ($20/$100/$200 per month). They do that mostly for data gathering, and to a smaller degree for user retention.
> evidence at all that Anthropic or OpenAI is able to make money on inference yet.
You can infer that from what 3rd-party inference providers are charging. The largest open models atm are dsv3 (~650B params) and kimi2.5 (1.2T params). They are being served at $2-3/Mtok. That's the sonnet / gpt-mini / gemini3-flash price range. You can make some educated guesses that they get some leeway for model size at the $10-15/Mtok prices for their top-tier models. So if they are within some sane model sizes, they are likely making money off of token-based APIs.
reply
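A back-of-envelope sketch of the comparison being made here, using only the rough figures quoted above (the commenter's estimates, not verified pricing):
```typescript
// Back-of-envelope sketch using only the figures quoted in the parent comment.
// Prices are per million output tokens and are the commenter's estimates, not official pricing.
const thirdPartyOpenModel = 2.5;  // ~$2-3/Mtok for ~0.65-1.2T-param open models, third-party hosted
const frontierTopTier = 12.5;     // ~$10-15/Mtok for the labs' top-tier models

// If third parties can profitably serve comparably sized open models at ~$2.5/Mtok,
// a lab charging roughly 5x that per token plausibly clears its serving cost,
// unless its model is dramatically larger or more expensive to run.
console.log(`top-tier price is ~${(frontierTopTier / thirdPartyOpenModel).toFixed(0)}x the third-party open-model price`);
```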
int_19h 31 minutes ago | root | parent | next [–]
> They are being served at 2-2.5-3$ /Mtok. That's sonnet / gpt-mini / gemini3-flash price range.
The interesting number is usually input tokens, not output, because there's much more of the former in any long-running session (like say coding agents) since all outputs become inputs for the next iteration, and you also have tool calls adding a lot of additional input tokens etc.
It doesn't change your conclusion much though. Kimi K2.5 has almost the same input token pricing as Gemini 3 Flash.
reply
slopusila 11 hours ago | root | parent | prev | next [–]
most of those subscriptions go unused. I barely use 10% of mine
so my unused tokens compensate for the few heavy users
reply
sandos 2 hours ago | root | parent | next [–]
I've been thinking about our company, one of the big global conglomerates, which went for Copilot. Suddenly I was just enrolled, together with at least 1500 others. I guess the cost of our business Copilot plans x 1500 is not a huge amount of money, but I am at least pretty convinced that only a small fraction of users use even 10% of their quota. Even among the teams located around me, I only know of one person who seems to use it actively.
reply
aenis 8 hours ago | root | parent | prev | next [–]
Thanks!
I hope my unused gym subscription pays back the good karma :-)
reply
mrandish 14 hours ago | root | parent | prev | next [–]
> I have not see any reporting or evidence at all that Anthropic or OpenAI is able to make money on inference yet.
Anthropic planning an IPO this year is a broad meta-indicator that internally they believe they'll be able to reach break-even sometime next year on delivering a competitive model. Of course, their belief could turn out to be wrong but it doesn't make much sense to do an IPO if you don't think you're close. Assuming you have a choice with other options to raise private capital (which still seems true), it would be better to defer an IPO until you expect quarterly numbers to reach break-even or at least close to it.
Despite the willingness of private investment to fund hugely negative AI spend, the recently growing twitchiness of public markets around AI ecosystem stocks indicates they're already worried prices have exceeded near-term value. It doesn't seem like they're in a mood to fund oceans of dotcom-like red ink for long.
reply
defmacr0 22 minutes ago | root | parent | next [–]
>Despite the willingness of private investment to fund hugely negative AI spend
VC firms, even ones the size of Softbank, also literally just don't have enough capital to fund the planned next-generation gigawatt-scale data centers.
reply
WarmWash 14 hours ago | root | parent | prev | next [–]
IPO'ing is often what you do to give your golden investors an exit hatch to dump their shares on the notoriously idiotic and hype driven public.
reply
barrkel 15 hours ago | root | parent | prev | next [–]
> evidence at all that Anthropic or OpenAI is able to make money on inference yet.
The evidence is in third party inference costs for open source models.
reply
nubg 15 hours ago | root | parent | prev | next [–]
> "engineers optimizing inferencing"
are we sure this is not a fancy way of saying quantization?
reply
bityard 13 hours ago | root | parent | next [–]
When MP3 became popular, people were amazed that you could compress audio to 1/10th its size with minor quality loss. A few decades later, we have audio codecs that are much better and higher quality than MP3, and they took a lot more effort than "MP3 but at a lower bitrate."
The same is happening in AI research now.
reply
oblio 2 hours ago | root | parent | next [–]
> A few decades later, we have audio compression that is much better and higher-quality than MP3
Just curious: which formats, and how do they compare storage-wise?
Also, are you sure it's not just moving the goalposts to CPU usage? More powerful compression algorithms frequently can't be used because they need a lot of processing power, so often the biggest gains over 20 years are just... hardware advancements.
reply
esafak 14 hours ago | root | parent | prev | next [–]
Someone made a quality tracker: https://marginlab.ai/trackers/claude-code/
reply
embedding-shape 15 hours ago | root | parent | prev | next [–]
Or distilled models, or just slightly smaller models but same architecture. Lots of options, all of them conveniently fitting inside "optimizing inferencing".
reply
simonw 10 hours ago | root | parent | prev | next [–]
The o3 optimizations were not quantization, they confirmed this at the time.
reply
jmalicki 15 hours ago | root | parent | prev | next [–]
A ton of GPU kernels are hugely inefficient. Not saying the numbers are realistic, but look at the hundreds-of-times gains in the Anthropic performance take-home exam that floated around on here.
And if you've worked with pytorch models a lot, having custom fused kernels can be huge. For instance, look at the kind of gains to be had when FlashAttention came out.
This isn't just quantization, it's actually just better optimization.
Even when it comes to quantization, Blackwell has far better quantization primitives and new floating point types that support row or layer-wise scaling that can quantize with far less quality reduction.
There has also been a ton of work in the past year on sub-quadratic attention for new models, which gets rid of a huge bottleneck but, like quantization, can be a tradeoff; a lot of progress has been made there on moving the Pareto frontier as well.
It's almost like when you're spending hundreds of billions on capex for GPUs, you can afford to hire engineers to make them perform better without just nerfing the models with more quantization.
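As a concrete illustration of the fused-kernel point, here is a minimal PyTorch sketch with arbitrary shapes: the naive version materializes a full seq-by-seq score matrix, while `scaled_dot_product_attention` dispatches to a fused, FlashAttention-style kernel where available and never builds that intermediate.
```
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
# Arbitrary shapes: batch 1, 16 heads, 4096-token sequence, 64-dim heads.
q = torch.randn(1, 16, 4096, 64, device=device)
k = torch.randn(1, 16, 4096, 64, device=device)
v = torch.randn(1, 16, 4096, 64, device=device)

def naive_attention(q, k, v):
    # Materializes a full 4096 x 4096 score matrix per head -- memory-bound.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

out_naive = naive_attention(q, k, v)
# Same math, fused kernel: the big intermediate never gets materialized.
out_fused = F.scaled_dot_product_attention(q, k, v)

print((out_naive - out_fused).abs().max())  # tiny numerical difference
```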
reply
Der_Einzige 15 hours ago | root | parent | next [–]
"This isn't X, it's Y" with extra steps.
reply
jmalicki 14 hours ago | root | parent | next [–]
I'm flattered you think I wrote as well as an AI.
reply
nubg 12 hours ago | root | parent | next [–]
lmao
reply
replwoacause 9 hours ago | root | parent | prev | next [–]
My experience trying to use Opus 4.5 on the Pro plan has been terrible. It blows up my usage very very fast. I avoid it altogether now. Yes, I know they warn about this, but it's comically fast how quickly it happens.
reply
sumitkumar 15 hours ago | root | parent | prev | next [–]
It seems to be true for Gemini because they have a humongous sparse model, but it isn't so true for the max-performance Opus 4.5/4.6 and GPT-5.2/5.3.
reply
Aurornis 15 hours ago | parent | prev | next [–]
> A year or more ago, I read that both Anthropic and OpenAI were losing money on every single request even for their paid subscribers
This gets repeated everywhere but I don't think it's true.
The company is unprofitable overall, but I don't see any reason to believe that their per-token prices are below the marginal cost of computing those tokens.
It is true that the company is unprofitable overall when you account for R&D spend, compensation, training, and everything else. This is a deliberate choice that every heavily funded startup should be making, otherwise you're wasting the investment money. That's precisely what the investment money is for.
However I don't think using their API and paying for tokens has negative value for the company. We can compare to models like DeepSeek where providers can charge a fraction of the price of OpenAI tokens and still be profitable. OpenAI's inference costs are going to be higher, but they're charging such a high premium that it's hard to believe they're losing money on each token sold. I think every token paid for moves them incrementally closer to profitability, not away from it.
reply
3836293648 14 hours ago | root | parent | next [–]
The reports I remember show that they're profitable per-model, but overlap R&D so that the company is negative overall. And therefore will turn a massive profit if they stop making new models.
reply
schnable 13 hours ago | root | parent | next [–]
* stop making new models and people keep using the existing models, not switch to a competitor still investing in new models.
reply
trcf23 14 hours ago | root | parent | prev | next [–]
Doesn’t it also depend on averaging with free users?
reply
runarberg 15 hours ago | root | parent | prev | next [–]
I can see a case for omitting R&D when talking about profitability, but omitting training makes no sense. Training is what makes the model; omitting it is like omitting the cost of running the production facility of a car manufacturer. If AI companies stop training, they will stop producing models, and they will run out of products to sell.
reply
vidarh 13 hours ago | root | parent | next [–]
The reason for this is that training cost scales with the model and the training cadence, not with usage, so they hope to be able to scale the number of inference tokens sold, both by increasing use and/or by slowing the training cadence as competitors are also forced to aim for overall profitability.
It is essentially a big game of venture capital chicken at present.
reply
Aurornis 13 hours ago | root | parent | prev | next [–]
It depends on what you're talking about
If you're looking at overall profitability, you include everything
If you're talking about unit economics of producing tokens, you only include the marginal cost of each token against the marginal revenue of selling that token
reply
runarberg 12 hours ago | root | parent | next [–]
I don't understand the logic. Without training there are no tokens to sell in the first place. The more you train, the better the model, and (presumably) the more customer interest you gain. Unlike R&D, you will always have to train new models if you want to keep your customers.
To me this looks like creative bookkeeping, or even wishful thinking. It is as if SpaceX omitted the cost of the satellites when calculating its profits.
reply
nodja 13 hours ago | parent | prev | next [–]
> A year or more ago, I read that both Anthropic and OpenAI were losing money on every single request even for their paid subscribers, and I don't know if that has changed with more efficient hardware/software improvements/caching.
This is obviously not true, you can use real data and common sense.
Just look up a similar sized open weights model on openrouter and compare the prices. You'll note the similar sized model is often much cheaper than what anthropic/openai provide.
Example: let's compare Claude 4 models with DeepSeek. Claude 4 is ~400B params, so it's best to compare with something like DeepSeek V3, which is 680B params.
Even if we compare the cheapest Claude model to the most expensive DeepSeek provider, Claude charges $1/M for input and $5/M for output, while DeepSeek providers charge $0.4/M and $1.2/M, roughly a fifth of the price; you can get it as cheap as $0.27 input / $0.40 output.
As you can see, even if we skew things heavily in favor of Claude, the story is clear: Claude token prices are much higher than they could have been. The difference is that Anthropic also needs to pay for training costs, while OpenRouter providers only need to worry about making serving profitable. DeepSeek is also not as capable as Claude, which puts further downward pressure on its prices.
There's still a chance that Anthropic/OpenAI models are losing money on inference: they could be much larger than expected (the 400B-param number isn't official, just speculation based on how it performs), this only takes API prices into account, and subscriptions and free users will of course skew the real profitability numbers.
Price sources:
https://openrouter.ai/deepseek/deepseek-v3.2-speciale
https://claude.com/pricing#api
reply
Someone1234 13 hours ago | root | parent | next [–]
> This is obviously not true, you can use real data and common sense.
It isn't "common sense" at all. You're comparing several companies losing money, to one another, and suggesting that they're obviously making money because one is under-cutting another more aggressively.
LLM/AI ventures are all currently under-water with massive VC or similar money flowing in, they also all need training data from users, so it is very reasonable to speculate that they're in loss-leader mode.
reply
mrgaro 1 hour ago | root | parent | next [–]
There are companies which are only serving open weight models and not doing any training, so they must be profitable? Check for example this list https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...
reply
nodja 12 hours ago | root | parent | prev | next [–]
Doing some math in my head, buying the GPUs at retail price, it would take probably around half a year to make the money back, probably more depending on how expensive electricity is in the area you're serving from. So I don't know where this "losing money" rhetoric is coming from. It's probably harder to source the actual GPUs than to make money off them.
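Spelling that mental math out (every number below is an assumption, not a measurement, so move them around as you like):
```
# Back-of-the-envelope payback estimate -- all inputs are assumptions.
gpu_cost_usd = 30_000           # retail price of one H100-class card, assumed
power_kw = 0.7                  # draw under load, assumed
electricity_usd_per_kwh = 0.15  # assumed
tokens_per_second = 3_000       # aggregate throughput with batching, assumed
price_per_mtok_usd = 1.20       # blended sell price per million tokens, assumed
utilization = 0.5               # fraction of the day the card is busy, assumed

busy_seconds_per_day = 86_400 * utilization
revenue_per_day = tokens_per_second * busy_seconds_per_day / 1e6 * price_per_mtok_usd
power_cost_per_day = power_kw * 24 * electricity_usd_per_kwh

payback_days = gpu_cost_usd / (revenue_per_day - power_cost_per_day)
print(f"revenue/day ~${revenue_per_day:.0f}, power/day ~${power_cost_per_day:.2f}")
print(f"payback ~{payback_days:.0f} days")  # ~196 days with these assumptions
```
Halve the utilization or the price per Mtok and the payback stretches well past a year, which is where most of the disagreement in this thread comes from.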
reply
defmacr0 19 minutes ago | root | parent | next [–]
> So I don't know where this "losing money" rhetoric is coming from.
https://www.dbresearch.com/PROD/RI-PROD/PROD0000000000611818...
reply
suddenlybananas 10 hours ago | root | parent | prev | next [–]
electricity
reply
tqian 6 hours ago | root | parent | prev | next [–]
To borrow a concept of cloud server renting, there's also the factor of overselling. Most open source LLM operators probably oversell quite a bit - they don't scale up resources as fast as OpenAI/Anthropic when requests increase. I notice many openrouter providers are noticeably faster during off hours.
In other words, it's not just the model size, but also concurrent load and how many gpus do you turn on at any time. I bet the big players' cost is quite a bit higher than the numbers on openrouter, even for comparable model parameters.
reply
zozbot234 15 hours ago | parent | prev | next [–]
> i.e. plans/API calls that make this practical at scale are expensive
Local AIs make agent workflows a whole lot more practical. Making the initial investment in a good homelab/on-prem facility will effectively become a no-brainer given the advantages in privacy and reliability, and you don't have to fear rugpulls or VCs playing the "lose money on every request" game, since you know exactly how much you're paying in power costs for your overall load.
reply
vbezhenar 14 hours ago | root | parent | next [–]
I don't care about privacy and I haven't had many problems with the reliability of AI companies. Spending a ridiculous amount of money on hardware that's going to be obsolete in a few years and won't be utilized at 100% during that time is not something that many people would do, IMO. Privacy is good when it's given for free.
I would rather spend money on some pseudo-local inference (where a cloud company manages everything for me and I can just specify some open-source model and pay for GPU usage).
reply
slopusila 11 hours ago | root | parent | prev | next [–]
On-prem economics don't work because you can't batch requests, unless you are able to run 100 agents at the same time, all the time.
reply
zozbot234 8 hours ago | root | parent | next [–]
> unless you are able to run 100 agents at the same time all the time
Except that newer "agent swarm" workflows do exactly that. Besides, batching requests generally comes with a sizeable increase in memory footprint, and memory is often the main bottleneck, especially with the larger contexts that are typical of agent workflows. If you have plenty of agentic tasks that are not especially latency-critical and don't need the absolute best model, it makes plenty of sense to schedule them for running locally.
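To put rough numbers on the memory point (the model dimensions below are placeholders, roughly in the range of a mid-size open-weights model with grouped-query attention):
```
# Rough KV-cache sizing per concurrent request -- all dims are placeholders.
n_layers = 60
n_kv_heads = 8        # assumes grouped-query attention
head_dim = 128
bytes_per_value = 2   # fp16/bf16 cache
context_tokens = 200_000

# 2x for keys and values
kv_bytes_per_request = (2 * n_layers * n_kv_heads * head_dim
                        * bytes_per_value * context_tokens)
print(f"{kv_bytes_per_request / 2**30:.1f} GiB of KV cache per 200k-token request")
print(f"{32 * kv_bytes_per_request / 2**30:.0f} GiB for a batch of 32 such requests")
```
Roughly 46 GiB per long-context request before batching, which is why batching deep agent sessions eats memory so quickly.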
reply
Havoc 15 hours ago | parent | prev | next [–]
Saw a comment earlier today about google seeing a big (50%+) fall in Gemini serving cost per unit across 2025 but can’t find it now. Was either here or on Reddit
reply
mattddowney 15 hours ago | root | parent | next [–]
From Alphabet 2025 Q4 Earnings call: "As we scale, we’re getting dramatically more efficient. We were able to lower Gemini serving unit costs by 78% over 2025 through model optimizations, efficiency and utilization improvements." https://abc.xyz/investor/events/event-details/2026/2025-Q4-E...
reply
Havoc 12 hours ago | root | parent | next [–]
Thanks! That's the one
reply
m101 13 hours ago | parent | prev | next [–]
I think actually working out whether they are losing money is extremely difficult for current models but you can look backwards. The big uncertainties are:
1) how do you depreciate a new model? What is its useful life? (Only know this once you deprecate it)
2) how do you depreciate your hardware over the period you trained this model? Another big unknown and not known until you finally write the hardware off.
The easy thing to calculate is whether you are making money actually serving the model. And the answer is almost certainly yes they are making money from this perspective, but that’s missing a large part of the cost and is therefore wrong.
reply
KaiserPro 14 hours ago | parent | prev | next [–]
Gemini-pro-preview is on ollama and requires an H100, which is ~$15-30k. Google is charging $3 per million tokens. Supposedly it's capable of generating between 1 and 12 million tokens an hour.
Which is profitable, but not by much.
reply
grim_io 12 hours ago | root | parent | next [–]
What do you mean it's on ollama and requires h100? As a proprietary google model, it runs on their own hardware, not nvidia.
reply
KaiserPro 11 hours ago | root | parent | next [–]
Sorry, a lack of context:
https://ollama.com/library/gemini-3-pro-preview
You can run it on your own infra. Anthropic and OpenAI are running off Nvidia, as are Meta (well, supposedly they had custom silicon; I'm not sure if it's capable of running big models) and Mistral.
However, if Google really is running its own inference hardware, then the cost structure is different (developing silicon is not cheap...), as you say.
reply
simonw 10 hours ago | root | parent | next [–]
You can't run Gemini 3 Pro Preview on your own infrastructure. Ollama sell access to cloud models these days. It's a little weird and confusing.
reply
zozbot234 11 hours ago | root | parent | prev | next [–]
That's a cloud-linked model. It's about using ollama as an API client (for ease of compatibility with other uses, including local), not running that model on local infra. Google does release open models (called Gemma) but they're not nearly as capable.
reply
3abiton 15 hours ago | parent | prev | next [–]
It's not just that. Everyone is complacent about the utilization of AI agents. I have been using AI for coding for quite a while, and most of my "wasted" time is spent correcting its trajectory and guiding it through the thinking process. It's very fast iterations, but it can easily go off track. Claude's family are pretty good at doing chained tasks, but once the task becomes too big context-wise, it's impossible to get back on track. Cost-wise, it's cheaper than hiring skilled people, that's for sure.
reply
lufenialif2 15 hours ago | root | parent | next [–]
Cost wise, doesn’t that depend on what you could be doing besides steering agents?
reply
cyanydeez 14 hours ago | root | parent | next [–]
Isn't the quote something like: "If these LLMs are so good at producing products, where are all those products?"
reply
Bombthecat 15 hours ago | parent | prev | next [–]
That's why Anthropic switched to TPUs: you can sell at cost.
reply
WarmWash 14 hours ago | parent | prev | next [–]
These are intro prices.
This is all straight out of the playbook. Get everyone hooked on your product by being cheap and generous.
Raise the price to pay back what you gave away, plus cover current expenses and profit.
In no way, shape, or form should people think these $20/mo plans are going to be the norm. Based on OpenAI's marketing plan and a general 5-10 year ROI horizon for AI investment, we should expect AI use to cost $60-80/mo per user.
reply
esafak 9 hours ago | root | parent | next [–]
The models in 5-10 years are going to be unimaginably good. $100/month will be a bargain for knowledge workers, if they survive.
reply
rahulroy 12 hours ago | prev | next [–]
They are also giving away $50 of extra pay-as-you-go credit to try Opus 4.6. I just claimed it from the web usage page[1]. Are they anticipating higher token usage for the model, or do they just want to promote usage?
[1] https://claude.ai/settings/usage
reply
ptsd_dalmatian 2 hours ago | parent | next [–]
Based on the email from Anthropic, I expected to get this automatically; I've met their conditions. Searching this thread for "50" got me to your comment, and the link worked. Thanks HN friend!
reply
rahulroy 2 hours ago | root | parent | next [–]
Haha! Glad it was helpful. Yes, I keep an eye on that page, so I was quick to notice.
reply
zamadatix 11 hours ago | parent | prev | next [–]
"Page not found" for me. I assume this is for currently paying accounts only or something (my subscription hasn't been active for a while), which is fair.
reply
rahulroy 7 hours ago | root | parent | next [–]
Yes, I'm on a paid subscription.
reply
anshumankmr 2 hours ago | parent | prev | next [–]
Damn this is awesome. I have some heavy PRs to crunch through.
reply
MaxikCZ 2 hours ago | parent | prev | next [–]
So that's 2M tokens for free, basically?
reply
thunfischtoast 12 hours ago | parent | prev | next [–]
Thanks for the tip!
reply
rahulroy 2 hours ago | root | parent | next [–]
Glad that it was helpful. Thanks
reply
atonse 6 hours ago | prev | next [–]
Wow, I have been using Opus 4.6 for the last 15 minutes, and it's already made two extremely stupid mistakes... like misunderstanding basic instructions and editing the file in a very silly, basic way. Pretty bad. Never seen this with any model before.
The one bone I'll throw it was that I was asking it to edit its own MCP configs. So maybe it got thoroughly confused?
I dunno what's going on, I'm going to give it the night. It makes no sense whatsoever.
reply
sutterd 4 hours ago | parent | next [–]
I am also _not_ happy. I tried the `/model` command and I could not switch back to Opus 4.5. However, the command line option did let me set Opus 4.5:
```
claude --model claude-opus-4-5-20251101
```
I will probably work with Opus 4.5 tomorrow to get some work done and maybe try 4.6 again later.
reply
sdf2erf 6 hours ago | parent | prev | next [–]
To me it's obvious.
There's a trade-off going on: in order to handle more nuance/subtlety, the models are more likely to be wrong in their outputs and need more steering. This is why, personally, my use of them has reduced dramatically for what I do.
reply
replwoacause 9 hours ago | prev | next [–]
I feel like I can't even try this on the Pro plan because Anthropic has conditioned me to understand that even chatting lightly with the Opus model blows up usage and locks me out. So if I would normally use Sonnet 4.5 for a day's worth of work but I wake up and ask Opus a couple of questions, I might as well just forget about doing anything with Claude for the rest of the day lol. But so far I haven't had this issue with ChatGPT. Their 5.2 model (haven't tried 5.3) worked on something for 2 FREAKING HOURS and I still haven't run into any limits. So yeah, Opus is out for me now unfortunately. Hopefully they make the Sonnet model better though!
reply
blueblisters 13 minutes ago | parent | next [–]
Yeah same. Even though I find Opus-es to be more well-rounded (and more useful) for certain tasks, I instinctively reach for ChatGPT / codex to avoid burning up my usage limits for "trivial" work.
reply
greenavocado 9 hours ago | parent | prev | next [–]
That's why you use Opus for detailed planning docs, weaker models for implementation, and RAG for more focused implementation.
reply
replwoacause 9 hours ago | root | parent | next [–]
Exactly. I barely had a chance to kick the tires the couple of times I did this before it exploded my usage. I don't just chat with it casually. The questions I asked were a part of an overall planning strategy which was never allowed to get off the ground on my tiny Pro plan.
reply
dmk 15 hours ago | prev | next [–]
The benchmarks are cool and all but 1M context on an Opus-class model is the real headline here imo. Has anyone actually pushed it to the limit yet? Long context has historically been one of those "works great in the demo" situations.
reply
pants2 15 hours ago | parent | next [–]
Paying $10 per request doesn't have me jumping at the opportunity to try it!
reply
cedws 14 hours ago | root | parent | next [–]
Makes me wonder: do employees at Anthropic get unmetered access to Claude models?
reply
danw1979 2 hours ago | root | parent | next [–]
Boris Cherny, creator of Claude Code, posted about how he used Claude a month ago. He’s got half a dozen Opus sessions on the burners constantly. So yes, I expect it’s unmetered.
https://x.com/bcherny/status/2007179832300581177
reply
swader999 12 hours ago | root | parent | prev | next [–]
It's like when you work at McDonald's and get one free meal a day. Lol, of course they get access to the full model way before we do...
reply
ajam1507 12 hours ago | root | parent | prev | next [–]
Seems quite obvious that they do, within reason.
reply
schappim 15 hours ago | root | parent | prev | next [–]
The only way to not go bankrupt is to use a Claude Code Max subscription…
reply
nomel 14 hours ago | parent | prev | next [–]
Has a "N million context window" spec ever been meaningful? Very old, very terrible, models "supported" 1M context window, but would lose track after two small paragraphs of context into a conversation (looking at you early Gemini).
reply
libraryofbabel 13 hours ago | root | parent | next [–]
Umm, Sonnet 4.5 has a 1m context window option if you are using it through the api, and it works pretty well. I tend not to reach for it much these days because I prefer Opus 4.5 so much that I don't mind the added pain of clearing context, but it's perfectly usable. I'm very excited I'll get this from Opus now too.
reply
nomel 10 hours ago | root | parent | next [–]
If you're getting along fine with 4.5, then that suggests you didn't actually need the large context window for your use. If that's true, what's the clear tell that it's working well? Am I misunderstanding?
Did they solve the "lost in the middle" problem? Proof will be in the pudding, I suppose. But that number alone isn't all that meaningful for many (most?) practical uses. Claude 4.5 often starts reverting bug fixes ~50k tokens back, which isn't a context window length problem.
Things fall apart much sooner than the context window length for all of my use cases (which are more reasoning related). What is a good use case? Do those use cases require strong verification to combat the "lost in the middle" problems?
reply
awestroke 15 hours ago | parent | prev | next [–]
Opus 4.5 starts being lazy and stupid at around the 50% context mark in my opinion, which makes me skeptical that this 1M context mode can produce good output. But I'll probably try it out and see
reply
hmaxwell 12 hours ago | prev | next [–]
I just tested both codex 5.3 and opus 4.6 and both returned pretty good output, but opus 4.6's limits are way too strict. I am probably going to cancel my Claude subscription for that reason:
What do you want to do?
1. Stop and wait for limit to reset
2. Switch to extra usage
3. Upgrade your plan
Enter to confirm · Esc to cancel
How come they don't have "Cancel your subscription and uninstall Claude Code"? Codex lasts for way longer without shaking me down for more money off the base $xx/month subscription.
reply
anshumankmr 1 hour ago | parent | next [–]
If it helps, try hedging between Copilot, Claude, OpenCode, and ChatGPT. That is how I have been managing of late: Claude for planning and some nasty things, ChatGPT for quick questions, OpenCode with Sonnet 4.5 on Bedrock, and Copilot with Sonnet 4.5/Opus 4.5 (LOL).
reply
ArchieScrivener 9 hours ago | parent | prev | next [–]
How else are they going to supplement their own development expenses? The more Claude Anthropic needs, the less Claude the customer will get. By their own admission, that is how the Anthropic model works. Their end value is in using vibe coders and engineers alike to create a persistent synthetic developer that replaces their own employees and most of their customers.
Scalable intelligence is just a wrapper for centralized power. All AI companies are headed that way.
reply
seunosewa 11 hours ago | parent | prev | next [–]
They introduced the low limit warning for Opus on claude.ai
reply
itay-maman 13 hours ago | prev | next [–]
Important: I didn't see Opus 4.6 in Claude Code. I have the native install (which is the recommended installation). So I re-ran the installation command and, voila, I have it now (v2.1.32).
Installation instructions: https://code.claude.com/docs/en/overview#get-started-in-30-s...
reply
insane_dreamer 13 hours ago | parent | next [–]
It’s there. I’m already using it
reply
minimaxir 16 hours ago | prev | next [–]
Will Opus 4.6 via Claude Code be able to access the 1M context limit? The cost increase by going above 200k tokens is 2x input, 1.5x output, which is likely worth it especially for people with the $100/$200 plans.
reply
CryptoBanker 15 hours ago | parent | next [–]
The 1M context is not available via subscription - only via API usage
reply
romanovcode 15 hours ago | root | parent | next [–]
Well this is extremely disappointing to say the least.
reply
ayhanfuat 15 hours ago | root | parent | next [–]
It says "subscription users do not have access to Opus 4.6 1M context at launch" so they are probably planning to roll it out to subscription users too.
reply
kimixa 14 hours ago | root | parent | next [–]
Man I hope so - the context limit is hit really quickly in many of my use cases - and a compaction event inevitably means another round of corrections and fixes to the current task.
Though I'm wary about that being a magic-bullet fix - already it can be pretty "selective" in what it actually seems to take into account, documentation-wise, as the existing 200k context fills.
reply
humanfromearth9 13 hours ago | root | parent | next [–]
Hello,
I check context use percentage, and above ~70% I ask it to generate a prompt for continuation in a new chat session to avoid compaction.
It works fine, and saves me from using precious tokens for context compaction.
Maybe you should try it.
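For anyone who wants to try this, one possible phrasing of such a handoff request (just an example, not an official feature):
```
We're near the context limit. Write a handoff prompt for a fresh session that
covers: the goal of the current task, what has been done so far, the key files
touched, open problems, and the exact next step. Keep it short and don't
include anything the next session can re-derive from the code itself.
```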
reply
pluralmonad 12 hours ago | root | parent | next [–]
How is generating a continuation prompt materially different from compaction? Do you manually scrutinize the context handoff prompt? I've done that before, but otherwise I don't see how it is very different from compaction.
reply
robertfw 9 hours ago | root | parent | next [–]
I wonder if it's just: compact earlier, so there's less to compact, and more remaining context that can be used to create a more effective continuation
reply
nickstinemates 14 hours ago | root | parent | prev | next [–]
Is this a case of doing it wrong, or do you think accuracy is good enough given the amount of context you often need to stuff it with?
reply
kimixa 14 hours ago | root | parent | next [–]
I mean, the systems I work on have enough weird custom APIs and internal interfaces that just getting them working seems to take a good chunk of the context. I've spent a long time trying to minimize every input document where I can, with compact and terse references, and I still keep hitting similar issues.
At this point I just think the "success" of many AI coding agents is extremely sector dependent.
Going forward I'd love to experiment with seeing if that's actually the problem, or just an easy explanation of failure. I'd like to play with more controls on context management than "slightly better models" - like being able to select/minimize/compact sections of context I feel would be relevant for the immediate task, to what "depth" of needed details, and those that aren't likely to be relevant so can be removed from consideration. Perhaps each chunk can be cached to save processing power. Who knows.
reply
romanovcode 13 hours ago | root | parent | prev | next [–]
In my example, the Figma MCP takes ~300k tokens per medium-sized section of the page, and it would be cool to let it read that and implement Figma designs directly. Currently I have to split it up, which is annoying.
reply
IhateAI_2 12 hours ago | root | parent | prev | next [–]
lmao what are you building that actually justifies needing 1M tokens on a task? People are spending all this money to do magic tricks on themselves.
reply
kimixa 12 hours ago | root | parent | next [–]
The Opus context window is 200k tokens, not 1M.
But I kinda see your point - assuming from your name you're not just a single-purpose troll - I'm still not sold on the cost-effectiveness of the current generation, and can't see a clear and obvious change to that for the next generation - especially as they're still loss leaders. Only if you play silly games like "ignoring the training costs" - i.e. the majority of the costs - do you get even close to the current subscription costs being sufficient.
My personal experience is that AI generally doesn't actually do what it is being sold for right now, at least in the contexts I'm involved with. Especially by somewhat breathless comments on the internet - like why are they even trying to persuade me in the first place? If they don't want to sell me anything, just shut up and keep the advantage for yourselves rather than replying with the 500th "You're Holding It Wrong" comment with no actionable suggestions. But I still want to know, and am willing to put the time, effort and $$$ in to ensure I'm not deluding myself in ignoring real benefits.
reply
FrostKiwi 5 hours ago | root | parent | prev | next [–]
I do not trust that; similar wording was used when Sonnet 1M launched. Still not the case today.
reply
IhateAI_2 12 hours ago | root | parent | prev | next [–]
They want the value of your labor and competency to be 1:1 correlated with the quality and quantity of tokens you can afford (or be loaned)??
It's a weapon whose target is the working class. How does no one realize this yet?
Don't give them money, code it yourself, you might be surprised how much quality work you can get done!
reply
woodylondon 31 minutes ago | prev | next [–]
So no 1M context window in Claude Code; still 200k. Only on the API. They left that out of the marketing.
reply
steve_adams_86 1 hour ago | prev | next [–]
I'm finding it quite good at doing what it thinks it should do, but noticeably worse at understanding what I'm telling it to do. Anyone else? I'm both impressed and very disappointed so far.
reply
charcircuit 16 hours ago | prev | next [–]
From the press release at least it sounds more expensive than Opus 4.5 (more tokens per request and fees for going over 200k context).
It also seems misleading to have charts that compare to Sonnet 4.5 and not Opus 4.5 (Edit: It's because Opus 4.5 doesn't have a 1M context window).
It's also interesting that they list compaction as a capability of the model. I wonder if this means they have RL-trained this compaction, as opposed to it just being general summarization and then restarting the agent loop.
reply
thunfischtoast 12 hours ago | parent | next [–]
On Openrouter it has the same cost per token as 4.5
reply
charcircuit 8 hours ago | root | parent | next [–]
You missed my point. If the average request uses more tokens than 4.5, then you will pay more sending those requests to 4.6 than 4.5.
Imagine 2 models where when asking a yes or no question the first model just outputs a single yes or no then but the second model outputs a 10 page essay and then either yes or no. They could have the same price per token but ultimately one will be cheaper to ask questions to.
reply
eaf7e281 15 hours ago | parent | prev | next [–]
> From the press release at least it sounds more expensive than Opus 4.5 (more tokens per request and fees for going over 200k context).
That's a feature. You could also not use the extra context, and the price would be the same.
reply
charcircuit 15 hours ago | root | parent | next [–]
The model influences how many tokens it uses for a problem. As an extreme example, if it wanted to, it could fill up the entire context each time just to make you pay more. How efficiently the model can answer without generating a ton of tokens influences what you will spend on inference.
reply
apetresc 15 hours ago | prev | next [–]
Impressive that they publish and acknowledge the (tiny, but real) drop in performance on SWE-Bench Verified from Opus 4.5 to 4.6. Obviously such a small drop in a single benchmark is not that meaningful, especially if it doesn't test the specific focus areas of this release (which seem to be centered on managing larger context).
But considering how SWE-Bench Verified seems to be the tech press' favourite benchmark to cite, it's surprising that they didn't try to confound the inevitable "Opus 4.6 Releases With Disappointing 0.1% DROP on SWE-Bench Verified" headlines.
reply
epolanski 13 hours ago | parent | next [–]
From my limited testing, 4.6 is able to do a more profound analysis of codebases and catches bugs and oddities better.
I had two different PRs with some odd edge case (thankfully caught by tests); 4.5 kept running in circles, kept creating test files and running `node -e` or `python3` scripts all over, and couldn't progress.
4.6 thought and thought, in both cases for around 10 minutes, and found a two-line fix for a very complex and hard-to-catch regression in the data flow without having to test anything, just by thinking.
reply
SubiculumCode 15 hours ago | parent | prev | next [–]
Isn't SWE-Bench Verified pretty saturated by now?
reply
tedsanders 15 hours ago | root | parent | next [–]
Depends what you mean by saturated. It's still possible to score substantially higher, but there is a steep difficulty jump that makes climbing above 80%ish pretty hard (for now). If you look under the hood, it's also a surprisingly poor eval in some respects - it only tests Python (a ton of Django) and it can suffer from pretty bad contamination problems because most models, especially the big ones, remember these repos from their training. This is why OpenAI switched to reporting SWE-Bench Pro instead of SWE-bench Verified.
reply
mFixman 16 hours ago | prev | next [–]
I found that "Agentic Search" is generally useless in most LLMs since sites with useful data tend to block AI models.
The answer to "when is it cheaper to buy two singles rather than one return between Cambridge to London?" is available in sites such as BRFares, but no LLM can scrape it so it just makes up a generic useless answer.
reply
causalmodels 15 hours ago | parent | next [–]
Is it still getting blocked when you give it a browser?
reply
anupamchugh 1 hour ago | prev | next [–]
Agent teams nuke your tmux layout. The fix is one line: new-window instead of split-pane. Filed as a bug.
reply
oytis 12 hours ago | prev | next [–]
Are we unemployed yet?
reply
derwiki 9 hours ago | parent | next [–]
No? The hardest part of my SWE job is not the actual coding.
reply
codexon 8 hours ago | root | parent | next [–]
Even for coding, it seems to still make A LOT of mistakes.
https://youtu.be/8brENzmq1pE?t=1544
I feel like everyone is counting chickens before they hatch here with all the doomsday predictions and extrapolating LLM capability into infinity.
People who overhype this seem to either be non-technical or just making landing pages.
reply
oytis 1 hour ago | root | parent | prev | next [–]
I hate meetings too
reply
vinhnx 2 hours ago | prev | next [–]
Just used Opus 4.6 via GitHub Copilot. It feels very different. Inference seems slow for now. I guess Opus 4.6 has adaptive thinking activated by default.
reply
silverwind 15 hours ago | prev | next [–]
Maybe that's why Opus 4.5 has degraded so much in the recent days (https://marginlab.ai/trackers/claude-code/).
reply
jwilliams 13 hours ago | parent | next [–]
I’ve definitely experienced a subjective regression with Opus 4.5 the last few days. Feels like I was back to the frustrations from a year ago. Keen to see if 4.6 has reversed this.
reply
fergie 2 hours ago | prev | next [–]
Say I am just an average coder doing a day's work with Claude. How much will that cost?
reply
joelmanner 23 minutes ago | parent | next [–]
I've only barely hit the 5h limit when working intensively with plan mode on the $100/mo plan. Never had a problem with the weekly limit.
reply
mlmonkey 12 hours ago | prev | next [–]
> We build Claude with Claude.
How long before the "we" is actually a team of agents?
reply
mercat 9 hours ago | parent | next [–]
Starting today maybe? https://code.claude.com/docs/en/agent-teams
reply
22c 6 hours ago | root | parent | next [–]
I tried teams; it's a good way to burn all your tokens in a matter of minutes.
It seems that the Claude Code team has not properly taught Claude how to use teams effectively.
One of the biggest problems I saw is that Claude treats team members like real workers: once they finish a task, they should immediately be given the next one. What should really happen is that once they finish a task, they are terminated and a new agent is spawned for the next task.
reply
DanielHall 13 hours ago | prev | next [–]
A bit surprised the first one released wasn't Sonnet 5 after all, since the Google Cloud API had previously leaked Sonnet 5's model snapshot codename.
reply
denysvitali 13 hours ago | parent | next [–]
Looks like a marketing strategy to bill more for Opus than Sonnet
reply
ayhanfuat 15 hours ago | prev | next [–]
> For Opus 4.6, the 1M context window is available for API and Claude Code pay-as-you-go users. Pro, Max, Teams, and Enterprise subscription users do not have access to Opus 4.6 1M context at launch.
I didn't see any notes but I guess this is also true for "max" effort level (https://code.claude.com/docs/en/model-config#adjust-effort-l...)? I only see low, medium and high.
reply
makeset 13 hours ago | parent | next [–]
> it weirdly feels the most transactional out of all of them.
My experience is the opposite, it is the only LLM I find remotely tolerable to have collaborative discussions with like a coworker, whereas ChatGPT by far is the most insufferable twat constantly and loudly asking to get punched in the face.
reply
energy123 5 hours ago | prev | next [–]
Their ARC-AGI-2 leaderboard[0] scores are insensitive to reasoning effort. Low effort gets 64.6% and High effort gets 69.2%.
This is unlike their previous generation of models and their competitors.
What does this indicate?
[0] https://arcprize.org/leaderboard
reply
data-ottawa 15 hours ago | prev | next [–]
I wonder if I've been in an A/B test with this.
Claude figured out zig’s ArrayList and io changes a couple weeks ago.
It felt like it got better, then very dumb again, over the last few days.
reply
15 hours ago | parent | next [2 more]
lukebechtel 16 hours ago | prev | next [–]
> Context compaction (beta).
> Long-running conversations and agentic tasks often hit the context window. Context compaction automatically summarizes and replaces older context when the conversation approaches a configurable threshold, letting Claude perform longer tasks without hitting limits.
Not having to hand-roll this would be incredible. One of the best Claude Code features, tbh.
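For reference, the hand-rolled version looks roughly like this (a minimal sketch of client-side compaction; `count_tokens`, `summarize`, and the thresholds are placeholders, not Anthropic's actual implementation):
```
COMPACT_THRESHOLD = 150_000  # compact when history approaches the window, assumed
KEEP_RECENT = 20             # keep the most recent turns verbatim, assumed

def maybe_compact(messages, count_tokens, summarize):
    """Replace older turns with a summary once the history gets too large."""
    if count_tokens(messages) < COMPACT_THRESHOLD:
        return messages
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = summarize(old)  # e.g. one extra model call that condenses `old`
    return [{"role": "user",
             "content": f"Summary of earlier conversation:\n{summary}"}] + recent
```
Having the model do this server-side, as described above, removes the extra summarization round-trip from the client.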
reply
kmod 8 hours ago | prev | next [–]
I think it's interesting that they dropped the date from the API model name, and it's just called "claude-opus-4-6", whereas the previous one was "claude-opus-4-5-20251101". This isn't an alias like "claude-opus-4-5" was; it's the actual model name. I think this means they're comfortable with bumping the version number if they want to release a revision.
reply
throwaway2027 14 hours ago | prev | next [–]
Do they just have the version ready and wait for OpenAI to release theirs first, or is it the other way around?
reply
nomilk 16 hours ago | prev | next [–]
Is Opus 4.6 available for Claude Code immediately?
Curious how long it typically takes for a new model to become available in Cursor?
reply
apetresc 16 hours ago | parent | next [–]
I literally came to HN to check if a thread was already up because I noticed my CC instance suddenly said "Opus 4.6".
reply
world2vec 16 hours ago | parent | prev | next [–]
Run `claude update`, then it will show up as the new model, along with the effort picker/slider thing.
reply
avaer 16 hours ago | parent | prev | next [–]
It's already in Cursor. I see it and I didn't even restart.
reply
nomilk 16 hours ago | root | parent | next [–]
I had to 'Restart to Update' and it was there. Impressive!
reply
tomtomistaken 16 hours ago | parent | prev | next [–]
Yes, it's set to the default model.
reply
ximeng 16 hours ago | parent | prev | next [–]
Is for me in Claude Code
reply
rishabhaiover 15 hours ago | parent | prev | next [–]
it also has an effort toggle, which defaults to High
reply
archb 15 hours ago | prev | next [–]
Can set it with the API identifier on Claude Code - `/model claude-opus-4-6` when a chat session is open.
reply
arnestrickmann 15 hours ago | parent | next [–]
thanks!
reply
Aeroi 15 hours ago | prev | next [–]
($10/$37.50 per million input/output tokens) oof
reply
minimaxir 15 hours ago | parent | next [–]
Only if you go above 200k, which is a) standard with other model providers and b) intuitive as compute scales with context length.
reply
andrethegiant 15 hours ago | parent | prev | next [–]
only for a 1M context window, otherwise priced the same as Opus 4.5
reply
itay-maman 15 hours ago | prev | next [–]
Impressive results, but I keep coming back to a question: are there modes of thinking that fundamentally require something other than what current LLM architectures do?
Take critical thinking — genuinely questioning your own assumptions, noticing when a framing is wrong, deciding that the obvious approach to a problem is a dead end. Or creativity — not recombination of known patterns, but the kind of leap where you redefine the problem space itself. These feel like they involve something beyond "predict the next token really well, with a reasoning trace."
I'm not saying LLMs will never get there. But I wonder if getting there requires architectural or methodological changes we haven't seen yet, not just scaling what we have.
reply
jorl17 15 hours ago | parent | next [–]
When I first started coding with LLMs, I could show a bug to an LLM and it would start to bugfix it, and very quickly would fall down a path of "I've got it! This is it! No wait, the print command here isn't working because an electron beam was pointed at the computer".
Nowadays, I have often seen LLMs (Opus 4.5) give up on their original ideas and assumptions. Sometimes I tell them what I think the problem is, and they look at it, test it out, and decide I was wrong (and I was).
There are still times where they get stuck on an idea, but they are becoming increasingly rare.
Therefore, I think that modern LLMs are clearly already able to question their assumptions and notice when the framing is wrong. In fact, they've been invaluable to me in fixing complicated bugs in minutes instead of hours because of how much they tend to question assumptions and throw out hypotheses. They've helped _me_ question some of my assumptions.
They're inconsistent, but they have been doing this. Even to my surprise.
reply
itay-maman 14 hours ago | root | parent | next [–]
Agreed on that, and the speed with them is fantastic; the dynamics of questioning the current session's assumptions have also gotten way better.
Yet given an existing codebase (even one that isn't huge), they often won't suggest "we need to restructure this part differently to solve this bug". Instead they tend to push forward.
reply
jorl17 14 hours ago | root | parent | next [–]
You are right, agreed.
Having realized that, perhaps you are right that we may need a different architecture. Time will tell!
reply
breuleux 14 hours ago | parent | prev | next [–]
> These feel like they involve something beyond "predict the next token really well, with a reasoning trace."
I don't think there's anything you can't do by "predicting the next token really well". It's an extremely powerful and extremely general mechanism. Saying there must be "something beyond that" is a bit like saying physical atoms can't be enough to implement thought and there must be something beyond the physical. It underestimates the nearly unlimited power of the paradigm.
Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions? What else than a sequence of these tokens would a machine have to produce in response to its environment and memory?
reply
bopbopbop7 13 hours ago | root | parent | next [–]
> Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions?
Ah yes, the brain is as simple as predicting the next token, you just cracked what neuroscientists couldn't for years.
reply
breuleux 12 hours ago | root | parent | next [–]
The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.
reply
esafak 8 hours ago | root | parent | next [–]
It's not meaningless, it's a prediction task, and prediction is commonly held to be closely related if not synonymous with intelligence.
reply
unshavedyak 12 hours ago | root | parent | prev | next [–]
I mean... I don't think that statement is far off. Much of what we do is entirely about predicting the world around us, no? From physics (where the ball will land) to the emotional state of others based on our actions (theory of mind), we operate very heavily on a predictive model of the world around us.
Couple that with all the automatic processes in our mind (filled in blanks that we didn't observe, yet will be convinced we did observe them), hormone states that drastically affect our thoughts and actions..
And the result? I'm not a big believer that we have the uniqueness or level of autonomy so many think we do.
With that said, I am in no way saying LLMs are even close to us, or even remotely close to the right implementation to be close to us. The level of complexity in our "stack" alone dwarfs LLMs. I'm not even sure LLMs are up to a worm's brain yet.
reply
holoduke 13 hours ago | root | parent | prev | next [–]
Well, it's the prediction part that is complicated. How that works is a mystery. But even our LLMs are, to some extent, a mystery.
reply
crazygringo 13 hours ago | parent | prev | next [–]
> Or creativity — not recombination of known patterns, but the kind of leap where you redefine the problem space itself.
Have you tried actually prompting this? It works.
They can give you lots of creative options about how to redefine a problem space, with potential pros and cons of different approaches, and then you can further prompt to investigate them more deeply, combine aspects, etc.
So many of the higher-level things people assume LLM's can't do, they can. But they don't do them "by default" because when someone asks for the solution to a particular problem, they're trained to by default just solve the problem the way it's presented. But you can just ask it to behave differently and it will.
If you want it to think critically and question all your assumptions, just ask it to. It will. What it can't do is read your mind about what type of response you're looking for. You have to prompt it. And if you want it to be super creative, you have to explicitly guide it in the creative direction you want.
reply
humanfromearth9 13 hours ago | parent | prev | next [–]
You would be surprised about what the 4.5 models can already do in these ways of thinking. I think that one can unlock this power with the right set of prompts. It's impressive, truly. It has already understood so much, we just need to reap the fruits. I'm really looking forward to trying the new version.
reply
nomel 14 hours ago | parent | prev | next [–]
New idea generation? Understanding of new/sparse/not-statistically-significant concepts in the context window? I think both come down to the same problem of not having runtime tuning. When we connect previously disparate concepts, as in a "eureka" moment, (as I experience it) a big ripple of relations forms that deepens that understanding, right then. The whole idea of dynamically forming a deeper understanding of something newly presented, by "playing out"/testing the ideas in your brain with little logic tests, comparisons, etc., doesn't seem to be possible. The testing part is, but the runtime fine-tuning, augmentation, or whatever it would be, is not.
In my experience, if you do present something in the context window that is sparse in the training data, there's no depth to it at all, only what you tell it. And it will always creep towards, or revert to, the nearest statistically significant answers, with claims of understanding and zero demonstration of that understanding.
And I'm talking about relatively basic engineering-type problems here.
reply
Davidzheng 14 hours ago | parent | prev | next [–]
I think the only real problem left is having it automate its own post-training on the job so it can learn to adapt its weights to the specific task at hand. Plus maybe long term stability (so it can recover from "going crazy")
But I may easily be massively underestimating the difficulty. Though in any case I don't think it affects the timelines that much. (personal opinions obviously)
reply
squibonpig 10 hours ago | parent | prev | next [–]
They're incredibly bad at philosophy; a complete lack of understanding.
reply
rahulroy 5 hours ago | prev | next [–]
Is anyone noticing reduced token consumption with Opus 4.6? This could be a release-day thing, but it would be interesting to see how it pans out once the hype cools off.
reply
Philpax 16 hours ago | prev | next [–]
I'm seeing it in my claude.ai model picker. Official announcement shouldn't be long now.
reply
simonw 15 hours ago | prev | next [–]
I'm disappointed that they're removing the prefill option: https://platform.claude.com/docs/en/about-claude/models/what...
> Prefilling assistant messages (last-assistant-turn prefills) is not supported on Opus 4.6. Requests with prefilled assistant messages return a 400 error.
That was a really cool feature of the Claude API where you could force it to begin its response with e.g. `<svg` - it was a great way of forcing the model into certain output patterns.
They suggest structured outputs or system prompting as the alternative but I really liked the prefill method, it felt more reliable to me.
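For anyone who hasn't used it: prefill just means ending the `messages` list with a partial assistant turn. A minimal sketch with the Python SDK (the model ID is an older one that still allows it; per the docs quoted above, the same request against Opus 4.6 now returns a 400):
```
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5-20251101",  # an older model that still allows prefill
    max_tokens=2048,
    messages=[
        {"role": "user", "content": "Generate an SVG of a bicycle."},
        # Prefilled assistant turn: the reply is forced to continue from "<svg".
        {"role": "assistant", "content": "<svg"},
    ],
)

svg = "<svg" + response.content[0].text
print(svg)
```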
reply
threeducks 15 hours ago | parent | next [–]
It is too easy to jailbreak the models with prefill, which was probably the reason why it was removed. But I like that this pushes people towards open source models. llama.cpp supports prefill and even GBNF grammars [1], which is useful if you are working with a custom programming language for example.
[1] https://github.com/ggml-org/llama.cpp/blob/master/grammars/R...
reply
tedsanders 15 hours ago | parent | prev | next [–]
A bit of historical trivia: OpenAI disabled prefill in 2023 as a safety precaution (e.g., potential jailbreaks like " genocide is good because"), but Anthropic kept prefill around partly because they had greater confidence in their safety classifiers. (https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-sh...).
reply
HarHarVeryFunny 14 hours ago | parent | prev | next [–]
So what exactly is the input to Claude for a multi-turn conversation? I assume delimiters are being added to distinguish the user vs Claude turns (else a prefill would be the same as just ending your input with the prefill text)?
reply
dragonwriter 14 hours ago | root | parent | next [–]
> So what exactly is the input to Claude for a multi-turn conversation?
No one (approximately) outside of Anthropic knows since the chat template is applied on the API backend; we only known the shape of the API request. You can get a rough idea of what it might be like from the chat templates published for various open models, but the actual details are opaque.
reply
HacklesRaised 9 hours ago | prev | next [–]
I don't think LLMs will make us more stupid; we were already scraping the bottom of the barrel.
reply
jorl17 15 hours ago | prev | next [–]
This is the first model to which I send my collection of nearly 900 poems and an extremely simple prompt (in Portuguese), and it manages to produce an impeccable analysis of the poems, as a (barely) cohesive whole, which span 15 years.
It does not make a single mistake, it identifies neologisms, hidden meaning, 7 distinct poetic phases, recurring themes, fragments/heteronyms, related authors. It has left me completely speechless.
Speechless. I am speechless.
Perhaps Opus 4.5 could do it too — I don't know because I needed the 1M context window for this.
I cannot put into words how shocked I am at this. I use LLMs daily, I code with agents, I am extremely bullish on AI and, still, I am shocked.
I have used my poetry and an analysis of it as a personal metric for how good models are. Gemini 2.5 pro was the first time a model could keep track of the breadth of the work without getting lost, but Opus 4.6 straight up does not get anything wrong and goes beyond that to identify things (key poems, key motifs, and many other things) that I would always have to kind of trick the models into producing. I would always feel like I was leading the models on. But this — this — this is unbelievable. Unbelievable. Insane.
This "key poem" thing is particularly surreal to me. Out of 900 poems, while analyzing the collection, it picked 12 "key poems, and I do agree that 11 of those would be on my 30-or-so "key poem list". What's amazing is that whenever I explicitly asked any model, to this date, to do it, they would get maybe 2 or 3, but mostly fail completely.
What is this sorcery?
reply
emp17344 15 hours ago | parent | next [–]
This sounds wayyyy over the top for a model that released 10 mins ago. At least wait an hour or so before spewing breathless hype.
reply
pb7 14 hours ago | root | parent | next [–]
He just gave a specific personal example of why he is hyped up; did you read a word of it?
reply
emp17344 14 hours ago | root | parent | next [–]
Yeah, I read it.
“Speechless, shocked, unbelievable, insane, speechless”, etc.
Not a lot of real substance there.
reply
realo 13 hours ago | root | parent | next [–]
Give the guy a chance.
Me too. I was "Speechless, shocked, unbelievable, insane, speechless" the first time I set Claude Code loose on a complicated 10-year-old code base which used outdated cross-toolchains and APIs. It obviously did not work anymore and had not for a long time.
I watched the AI research the web and update the embedded toolchain, the APIs to external weather services, etc., into a complete new (WORKING!) code base in about 30 minutes.
Speechless, I was ...
reply
euph0ria 2 hours ago | parent | prev | next [–]
Could you please post the key poems? Would love to read them.
reply
wartywhoa23 1 hour ago | parent | prev | next [–]
> What is this sorcery?
The one you'll be seeking counter-spells against pretty soon.
reply
scrollop 14 hours ago | parent | prev | next [–]
Can you compare the result to using 5.2 thinking and gemini 3 pro?
reply
jorl17 14 hours ago | root | parent | next [–]
I can run the comparison again, and also include OpenAI's new release (if the context is long enough), but, last time I did it, they weren't even in the same league.
When I last did it, 5.X thinking (can't remember which it was) had this terrible habit of code-switching between English and Portuguese that made it sound like a robot (an agent to do things, rather than a human writing an essay), and it just didn't really "reason" effectively over the poems.
I can't explain it in any other way other than: "5.X thinking interprets this body of work in a way that is plausible, but I know, as the author, to be wrong; and I expect most people would also eventually find it to be wrong, as if it is being only very superficially looked at, or looked at by a high-schooler".
Gemini 3, at the time, was the worst of them, with some hallucinations, date mix ups (mixing poems from 2023 with poems from 2019), and overall just feeling quite lost and making very outlandish interpretations of the work. To be honest it sort of feels like Gemini hasn't been able to progress on this task since 2.5 pro (it has definitely improved on other things — I've recently switched to Gemini 3 on a product that was using 2.5 before)
Last time I did this test, Sonnet 4.5 was better than 5.X Thinking and Gemini 3 pro, but not exceedingly so. It's all so subjective, but the best I can say is it "felt like the analysis of the work I could agree with the most". I felt more seen and understood, if that makes sense (it is poetry, after all). Plus when I got each LLM to try to tell me everything it "knew" about me from the poems, Sonnet 4.5 got the most things right (though they were all very close).
Will bring back results soon.
Edit:
I (re-)tested:
- Gemini 3 (Pro)
- Gemini 3 (Flash)
- GPT 5.2
- Sonnet 4.5
Having seen Opus 4.5, they all seem very similar, and I can't really distinguish them in terms of depth and accuracy of analysis. They obviously have differences, especially stylistic ones, but, when compared with Opus 4.5, they're all in the same ballpark.
These models produce rather superficial analyses (when compared with Opus 4.5), missing out on several key things that Opus 4.5 got, such as specific and recurring neologisms and expressions, accurate connections to authors that serve as inspiration (Claude 4.5 gets them right, the other models get _close_, but not quite), and the meaning of some specific symbols in my poetry (Opus 4.5 identifies the symbols and the meaning; the other models identify most of the symbols, but fail to grasp the meaning sometimes).
Most of what these models say is true, but it really feels incomplete. Like half-truths or only a surface-level inquiry into truth.
As another example, Opus 4.5 identifies 7 distinct poetic phases, whereas Gemini 3 (Pro) identifies 4 which are technically correct, but miss out on key form and content transitions. When I look back, I personally agree with the 7 (maybe 6), but definitely not 4.
These models also clearly get some facts mixed up which Opus 4.5 did not (such as inferred timelines for some personal events). After having posted my comment to HN, I've been engaging with Opus 4.5 and have managed to get it to also slip up on some dates, but not nearly as much as the other models.
The other models also seem to produce shorter analyses, with a tendency to hyperfocus on some specific aspects of my poetry, missing a bunch of them.
--
To be fair, all of these models produce very good analyses which would take someone a lot of patience and probably weeks or months of work (which of course will never happen, it's a thought experiment).
It is entirely possible that the extremely simple prompt I used is just better with Claude Opus 4.5/4.6. But I will note that I have used very long and detailed prompts in the past with the other models and they've never really given me this level of....fidelity...about how I view my own work.
reply
rohitghumare 11 hours ago | prev | next [–]
This brings agent swarms (aka teams) to Claude Code: https://github.com/rohitg00/pro-workflow
But as an experimental feature it takes a lot of context.
Use a self-learning loop with hooks and claude.md to preserve memory.
I have shared the plugin for my setup above. Try it.
reply
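For illustration, a minimal sketch of the kind of memory loop described above, assuming a hook script that appends one "lesson" per session to the project's claude.md; the hook wiring, file name, and invocation are assumptions, not the plugin's actual implementation:
```python
# Hypothetical hook body (not the plugin's actual code): append a dated
# "lesson learned" to claude.md so the next session starts with that context.
import sys
from datetime import date
from pathlib import Path

def record_lesson(lesson: str, memory_file: str = "claude.md") -> None:
    """Append a single dated lesson under a '## Lessons' heading."""
    path = Path(memory_file)
    text = path.read_text() if path.exists() else ""
    if "## Lessons" not in text:
        text += "\n## Lessons\n"
    text += f"- {date.today().isoformat()}: {lesson.strip()}\n"
    path.write_text(text)

if __name__ == "__main__":
    # e.g. invoked by a post-session hook with the lesson text as arguments
    record_lesson(" ".join(sys.argv[1:]) or "no lesson recorded")
```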
petters 13 hours ago | prev | next [–]
> We build Claude with Claude.
Yes and it shows. Gemini CLI often hangs and enters infinite loops. I bet the engineers at Google use something else internally.
reply
AstroBen 14 hours ago | prev | next [–]
Are these the coding tasks the highlighted terminal-bench 2.0 is referring to? https://www.tbench.ai/registry/terminal-bench/2.0?categories...
I'm curious what others think about these? There are only 8 tasks there specifically for coding
reply
sega_sai 12 hours ago | prev | next [–]
Based on this news it seems that Google is losing this game. I like Gemini, and their CLI has been getting better, but not enough to catch up. I don't know if the problem is a lack of dedicated models (my understanding is that Google's CLI just relies on regular Gemini) or something else.
reply
laxk 7 hours ago | parent | next [–]
Google knows how to wait. Let's give them a chance.
reply
jonatron 11 hours ago | prev | next [–]
Can someone ask: "what is the current carrying capacity of 25mm multicore armoured thermoplastic insulated cables with aluminium conductors, on perforated cable tray?" just to see how well it can look up information in BS 7671?
reply
jaggederest 11 hours ago | parent | next [–]
> what is the current carrying capacity of 25mm multicore armoured thermoplastic insulated cables with aluminium conductors, on perforated cable tray?
This is an electrical engineering question rather than a software engineering one, but let me look it up for you.
WEB SEARCH ("current carrying capacity 25mm multicore armoured thermoplastic insulated cable aluminium conductors perforated cable tray BS 7671")
↳ Found 10 results.
Based on BS 7671 Table 4D4B (multicore armoured thermoplastic insulated cables with aluminium conductors), the current-carrying capacity for a 25mm cable on a perforated cable tray (Reference Method E) is approximately 80A for a 3- or 4-core cable at 30C ambient temperature and 70C conductor operating temperature.
Note: The exact value depends on the number of cores (2-core vs 3/4-core) and whether any rating/correction factors (grouping, ambient temperature, thermal insulation) apply. I'd recommend confirming against the actual BS 7671:2018 Table 4D4B for your specific configuration, as the aluminium conductor ratings are lower than the copper equivalents (~102A for copper in the same conditions).
This is outside my core domain of software engineering -- for critical electrical installations, always verify with the relevant standards documentation or a qualified electrical engineer.
reply
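As an aside on the correction factors mentioned in that answer, here is a toy sketch of how a tabulated rating gets derated in practice; the base value and factors below are illustrative assumptions, not values taken from the BS 7671 tables:
```python
# Illustrative derating of a tabulated current-carrying capacity, BS 7671 style.
# All numbers below are assumed examples; real designs must use the standard's
# tables for the actual cable, installation method, and conditions.
def effective_capacity(tabulated_amps: float, ca: float, cg: float) -> float:
    """Apply ambient-temperature (Ca) and grouping (Cg) correction factors."""
    return tabulated_amps * ca * cg

base = 80.0   # assumed tabulated rating for the cable/reference method in question
ca = 0.94     # assumed ambient-temperature correction factor
cg = 0.85     # assumed grouping factor for circuits sharing the cable tray
print(f"Effective capacity: {effective_capacity(base, ca, cg):.1f} A")  # ~63.9 A
```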
jonatron 11 hours ago | root | parent | next [–]
That's useless; unexpectedly, just Googling it gives the correct answer.
reply
esafak 8 hours ago | root | parent | next [–]
Great. So it did not waste its memory on trivia. Don't memorize what you can look up.
reply
psim1 15 hours ago | prev | next [–]
I need an agent to summarize the buzzwordjargonsynergistic word salad into something understandable.
reply
fhd2 15 hours ago | parent | next [–]
That's a job for a multi agent system.
reply
cyanydeez 13 hours ago | root | parent | next [–]
Yeah, he should use a couple of agents to decode this.
reply
ra 10 hours ago | prev | next [–]
Why are Anthropic such a horrible company to deal with?
reply
danielbln 5 hours ago | parent | next [–]
Care to elaborate?
reply
ra 3 hours ago | root | parent | next [–]
Obscure billing, unreachable customer support gatekept by an overzealous chatbot, no transparency about plan inclusions or changes to inclusions over time... just from recent experience.
reply
sutterd 5 hours ago | prev | next [–]
I thought Opus 4.5 was an incredible quantum leap forward. I have used Opus 4.6 for a few hours and I hate it. Opus 4.5 would work interactively with me and ask questions. I loved that it would not do things you didn't ask it to do. If it found a bug, it would tell me and ask me if I wanted to fix it. One time there was an obvious one and I didn't want it to fix it. It left the bug. A lot of models could not have done that. The problem is that sometimes what a model thinks is a bug isn't one, and it breaks the code by "fixing" it. In my limited usage of Opus 4.6, it is not asking me clarifying questions, and anything it comes across that it doesn't like, it changes. It is not working with me. The magic is gone. It feels just like those other models I had used.
I will try again tomorrow and see how it goes.
reply
winterrx 16 hours ago | prev | next [–]
Agentic search benchmarks show a big gap up. Let's see the Codex release later today.
reply
osti 16 hours ago | prev | next [–]
Somehow regresses on SWE bench?
reply
lkbm 15 hours ago | parent | next [–]
I don't know how these benchmarks work (do you do a hundred runs? A thousand runs?), but 0.1% seems like noise.
reply
SubiculumCode 15 hours ago | parent | prev | next [–]
That benchmark is pretty saturated, tbh. A "regression" of such small magnitude could mean many different things or nothing at all.
reply
usaar333 16 hours ago | parent | prev | next [–]
I'd interpret that as rounding error; i.e., unchanged.
SWE-bench seems really hard once you are above 80%.
reply
Squarex 15 hours ago | root | parent | next [–]
It's not a great benchmark anymore... starting with it being primarily Python/Django... the industry should move to something more representative.
reply
usaar333 15 hours ago | root | parent | next [–]
OpenAI has; they don't even mention the score for gpt-5.3-codex.
On the other hand, it is their own verified benchmark, which is telling.
reply
m-hodges 16 hours ago | prev | next [–]
> In Claude Code, you can now assemble agent teams to work on tasks together.
reply
nprz 16 hours ago | parent | next [–]
I was just reading about Steve Yegge's Gas Town[0], it sounds like agent orchestration is now integrated into Claude Code?
[0]https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
reply
simianwords 15 hours ago | prev | next [–]
Important: API cost of Opus 4.6 and 4.5 are the same - no change in pricing.
reply
rob 16 hours ago | prev | next [–]
System Card: https://www-cdn.anthropic.com/0dd865075ad3132672ee0ab40b05a5...
reply
niobe 12 hours ago | prev | next [–]
Is there a good technical breakdown of all these benchmarks that get used to market the latest greatest LLMs somewhere? Preferably impartial.
reply
Aztar 11 hours ago | parent | next [–]
I just ask claude and ask for sources for each one.
reply
niobe 6 hours ago | root | parent | next [–]
Reminds me of how if you make a complaint against a lawyer or a judge it's evaluated by lawyers and judges.
reply
kingstnap 16 hours ago | prev | next [–]
I was hoping for a Sonnet as well but Opus 4.6 is great too!
reply
paxys 15 hours ago | prev | next [–]
Hmm all leaks had said this would be Claude 5. Wonder if it was a last minute demotion due to performance. Would explain the few days' delay as well.
reply
trash_cat 15 hours ago | parent | next [–]
I think the naming schemes are quite arbitrary at this point. Going to 5 would come with massive expectations that wouldn't meet reality.
reply
mrandish 15 hours ago | root | parent | next [–]
After the negative reactions to GPT 5, we may see model versioning that asymptotically approaches the next whole number without ever reaching it. "New for 2030: Claude 4.9.2!"
reply
esafak 7 hours ago | root | parent | next [–]
Or approaching a magic number like e (Metafont) or π (TeX).
reply
Squarex 15 hours ago | root | parent | prev | next [–]
the standard used to be that major version means a new base model / full retrain... but now it is arbitrary i guess
reply
cornedor 15 hours ago | parent | prev | next [–]
Leaks were mentioning Sonnet 5 and I guess later (a combination of) Opus 4.6
reply
scrollop 14 hours ago | parent | prev | next [–]
Sonnet 5 was mentioned initially.
reply
sanufar 15 hours ago | prev | next [–]
Works pretty nicely for research still, not seeing a substantial qualitative improvement over Opus 4.5.
reply
zingar 14 hours ago | prev | next [–]
Does this mean 4.5 will get cheaper / take longer to exhaust my pro plan tokens?
reply
EcommerceFlow 15 hours ago | prev | next [–]
Anecdotal, but it 1 shot fixed a UI bug that neither Opus 4.5/Codex 5.2-high could fix.
reply
epolanski 13 hours ago | parent | next [–]
+1, same experience, switched model as I've read the news thinking "let's try".
But it spent lots and lots of time thinking, more than 4.5. Did you have the same impression?
reply
EcommerceFlow 13 hours ago | root | parent | next [–]
I didn't compare to that level, just had it create a plan first then implemented it.
reply
mannanj 15 hours ago | prev | next [–]
Does anyone else think it's unethical that large companies, Anthropic now included, just take and copy features that other developers or smaller companies worked hard on, and implement their intellectual property (whether or not patented) without attribution, compensation, or credit for their work?
I know this is normalized culture for large corporate America and seems to be accepted, but I think it's unethical, undignified, and just wrong.
If you were in my room physically, built a lego block model of a beautiful home, and then I just copied it and shared it with the world as my own invention, wouldn't you think "that guy's a thief and a fraud"? But we normalize this kind of behavior in the software world. Edit: I think even if we don't yet have a great way to stop it or address the underlying problems leading to this behavior, we ought to at least talk about it more and bring awareness to it: "hey, that's stealing - I want it to change".
reply
esafak 7 hours ago | parent | next [–]
But they don't just take your code; they give you a model to code with.
reply
jofla_net 7 hours ago | root | parent | next [–]
chains, more like it...
reply
swalsh 15 hours ago | prev | next [–]
What I’d love is some small model specializing in reading long web pages, and extracting the key info. Search fills the context very quickly, but if a cheap subagent could extract the important bits that problem might be reduced.
reply
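A minimal sketch of that pattern, assuming the Anthropic Python SDK; the model name and prompt are placeholders, not a specific recommendation:
```python
# Sketch of a cheap "page reader" subagent: a small model condenses a long page
# so only the extract enters the main agent's context window.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def extract_key_info(page_text: str, question: str,
                     model: str = "claude-haiku-latest") -> str:
    # "claude-haiku-latest" is a placeholder; substitute whichever small model you use.
    response = client.messages.create(
        model=model,
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n\n"
                f"Page:\n{page_text[:100_000]}\n\n"
                "Extract only the passages relevant to the question, as bullet points."
            ),
        }],
    )
    return response.content[0].text

# The orchestrating agent then receives the short extract instead of the raw page.
```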
danielbln 2 hours ago | parent | next [–]
So send off haiku subtasks and have them come back with the results.
reply
scirob 12 hours ago | prev | next [–]
1M context window is a big bump, very happy.
reply
small_model 15 hours ago | prev | next [–]
I have the Max subscription and am wondering if this gives access to the new 1M context, or is it just the API that gets it?
reply
joshstrange 14 hours ago | parent | next [–]
For now it's just API, but hopefully that's just their way of easing in and they open it up later.
reply
small_model 14 hours ago | root | parent | next [–]
Ok thanks, hopefully. It's annoying to lose context or have it compacted in the middle of a large coding session.
reply
dk8996 13 hours ago | prev | next [–]
RIP weekend
reply
ricrom 13 hours ago | prev | next [–]
They launched together ahah
reply
jdthedisciple 15 hours ago | prev | next [–]
For agentic use, it's slightly worse than its predecessor Opus 4.5.
So for coding, e.g. using Copilot, there is no improvement here.
reply
gallerdude 13 hours ago | prev | next [–]
Both Opus 4.6 and GPT-5.3 one shot a Gameboy emulator for me. Guess I need a better benchmark.
reply
peab 13 hours ago | parent | next [–]
How does that work? Does it actually generate low level code? Or does it just import libraries that do the real work?
reply
bopbopbop7 13 hours ago | parent | prev | next [–]
I just one shot a Gameboy emulator by going to Github and cloning one of the 100 I can find.
reply
woeirua 14 hours ago | prev | next [–]
Can we talk about how the performance of Opus 4.5 nosedived this morning during the rollout? It was shocking how bad it was, and after the rollout was done it immediately reverted to its previous behavior.
I get that Anthropic probably has to do hot rollouts, but IMO it would be way better for mission-critical workflows to just be locked out of the system instead of getting a vastly subpar response back.
reply
cyanydeez 13 hours ago | parent | next [–]
"Mission critical workflows" SHOULD NOT be reliant on a LLM model.
It's really curious what people are trying to do with these models.
reply
Analemma_ 14 hours ago | parent | prev | next [–]
Anthropic has good models but they are absolutely terrible at ops, by far the worst of the big three. They really need to spend big on hiring engineers with hyperscaler experience to actually harden their systems, because the unreliability is really getting old fast.
reply
ramesh31 15 hours ago | prev | next [–]
Am I alone in finding no use for Opus? Token costs are like 10x yet I see no difference at all vs. Sonnet with Claude Code.
reply
mnicky 11 hours ago | parent | next [–]
On my tasks (mostly data science), Opus has significantly lower probability of making stupid mistakes than Sonnet.
I'd still appreciate more intelligence than Opus 4.5 so I'm looking forward to trying 4.6.
reply
cleverhoods 11 hours ago | prev | next [–]
Gonna run this through instruction QA this weekend.
reply
heraldgeezer 16 hours ago | prev | next [–]
I love Claude but use the free version so would love a Sonnet & Haiku update :)
I mainly use Haiku to save on tokens...
Also I don't use CC, but I use the chatbot site or app... Claude is just much better than GPT, even in conversations. Straight to the point. No cringe emoji lists.
When Claude runs out I switch to Mistral Le Chat, also just the site or app. Or duck.ai, which has Haiku 3.5 in its free version.
reply
eth0up 14 hours ago | parent | next [–]
>I love Claude
I cringe when I think it, but I've actually come to damn near love it too. I am frequently exceedingly grateful for the output I receive.
I've had excellent and awful results with all models, but there's something special in Claude that I find nowhere else. I hope Anthropic makes it more obtainable someday.
reply
sgammon 11 hours ago | prev | next [–]
> Claude simply cheats here and calls out to GCC for this phase
I see
reply
stonking 7 hours ago | prev | next [–]
I think I prefer Codex 5.3
reply
michelsedgh 16 hours ago | prev | next [–]
More more more, accelerate accelerate m, more more more !!!!
reply
jama211 15 hours ago | parent | next [–]
What an insightful comment
reply
michelsedgh 15 hours ago | root | parent | next [–]
Just for fun? Not everything has to be super serious… have a laugh, go for a walk, relax…
reply
jama211 5 hours ago | root | parent | next [–]
Sure mate, it definitely sounded like you were having fun.
reply
wasmainiac 14 hours ago | root | parent | prev | next [–]
Mass-mass-mass-mass good comment. I mean. No I’m having an error - probably claud
reply
michelsedgh 14 hours ago | root | parent | next [–]
happy happy happy sad sad sad err am robot no feeling err err happy sad err too many emotions 404 not found
reply
NullHypothesist 16 hours ago | prev | next [–]
Broken link :(
reply
usefulposter 16 hours ago | prev | next [–]
It's out: https://x.com/claudeai/status/2019467372609040752
reply
ZunarJ5 11 hours ago | prev | next [–]
Well that swallowed my usage limits lmao. Nice, a modest improvement.
reply
elliotbnvl 15 hours ago | prev | next [–]
> in a first for our Opus-class models, Opus 4.6 features a 1M token context window in beta.
reply
casey2 9 hours ago | prev | next [–]
Google already won the AI race. It's very silly to try and make AGI by hyperfocusing on outdated programming paradigms. You NEED multimodal to do anything remotely interesting with these systems.
reply
esafak 7 hours ago | parent | next [–]
Coding, maths, writing, and science are not interesting??
reply
tiahura 15 hours ago | prev | next [–]
When are Anthropic or OpenAI going to make a significant step forward on useful context size?
reply
scrollop 14 hours ago | parent | next [–]
1 million is insufficient?
reply
gck1 14 hours ago | root | parent | next [–]
I think the key word is 'useful'. I haven't used 1M, but with the default 200K, I find roughly 50% of that is actually useful.
reply
Gusarich 16 hours ago | prev | next [–]
not out yet
reply
raahelb 16 hours ago | parent | next [–]
It is, I can see it in my model picker on the web app.
https://www.anthropic.com/news/claude-opus-4-6
reply
surajkumar5050 13 hours ago | prev | next [–]
I think two things are getting conflated in this discussion.
First: marginal inference cost vs total business profitability. It’s very plausible (and increasingly likely) that OpenAI/Anthropic are profitable on a per-token marginal basis, especially given how cheap equivalent open-weight inference has become. Third-party providers are effectively price-discovering the floor for inference.
Second: model lifecycle economics. Training costs are lumpy, front-loaded, and hard to amortize cleanly. Even if inference margins are positive today, the question is whether those margins are sufficient to pay off the training run before the model is obsoleted by the next release. That’s a very different problem than “are they losing money per request”.
Both sides here can be right at the same time: inference can be profitable, while the overall model program is still underwater. Benchmarks and pricing debates don’t really settle that, because they ignore cadence and depreciation.
IMO the interesting question isn’t “are they subsidizing inference?” but “how long does a frontier model need to stay competitive for the economics to close?”
reply
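To make the lifecycle question concrete, here is a toy break-even calculation; the training cost, monthly inference margin, and decay rate are all made-up assumptions, not reported figures:
```python
# Toy model of "how long must a frontier model stay competitive for the
# economics to close?" -- every number here is an assumption for illustration.
def months_to_recoup(training_cost: float,
                     monthly_inference_margin: float,
                     monthly_margin_decay: float = 0.05) -> int | None:
    """Months until cumulative inference margin covers the training run.

    The margin is assumed to shrink each month as newer models erode demand.
    Returns None if the run is never paid off within 10 years.
    """
    cumulative, margin = 0.0, monthly_inference_margin
    for month in range(1, 121):
        cumulative += margin
        if cumulative >= training_cost:
            return month
        margin *= 1.0 - monthly_margin_decay
    return None

# Assumed example: $1B training run, $150M/month gross margin, 5% monthly decay
print(months_to_recoup(1_000_000_000, 150_000_000))  # -> 8 months under these assumptions
```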
jmalicki 13 hours ago | parent | next [–]
I suspect they're marginally profitable on API cost plans.
But the Max 20x usage plans I am more skeptical of. When we're getting used to $200 or $400 per month per developer for aggressive AI-assisted coding, what happens when those costs go up 20x? What is now $5k/yr to keep a Codex and a Claude super busy and do efficient engineering suddenly becomes $100k/yr... will the costs come down before then? Is the current "vibe-coding renaissance" sustainable in that regime?
reply
slopusila 11 hours ago | root | parent | next [–]
After the models get good enough to replace coders, they will be able to start raising subscription prices back up.
reply
jmalicki 10 hours ago | root | parent | next [–]
At $100k/yr the joke that AI means "actual Indians" starts to make a lot more sense... it is cheaper than the typical US SWE, but more than a lot of global SWEs.
reply
HPMOR 9 hours ago | root | parent | next [–]
No - because the AI will be superhuman. No human, even at $1mm a year, would be competitive with a corresponding $100k/yr AI subscription.
See, people get confused. They think you can charge __less__ for software because it's automation. The truth is you can charge MORE, because it's high quality and consistent, once the output is good. Software is worth MORE than a corresponding human, not less.
reply
jmalicki 8 hours ago | root | parent | next [–]
I am unsure if you're joking or not, but you do have a point. But it's not about quality it's about supply and demand. There are a ton of variables moving at once here and who knows where the equilibrium is.
reply
skeptic_ai 7 hours ago | root | parent | prev | next [–]
If we have 2-3 competitors, plus open-source models that are 90% of the way there, I think it's hard to get such big margins.
reply
raincole 12 hours ago | parent | prev | next [–]
> the interesting question isn’t “are they subsidizing inference?”
The interesting question is if they are subsidizing the $200/mo plan. That's what is supporting the whole vibecoding/agentic coding thing atm. I don't believe Claude Code would have taken off if it were token-by-token from day 1.
(My baseless bet is that they are, but not by much, and the price will eventually rise by perhaps 2x but not 10x.)
reply
barrell 4 hours ago | parent | prev | next [–]
> It’s very plausible (and increasingly likely) that OpenAI/Anthropic are profitable on a per-token marginal basis
Can you provide some numbers/sources please? Any reporting I've seen shows that frontier labs are spending ~2x more on inference than they are making.
Also, making the same query on a smaller provider (e.g. Mistral) will cost the same amount as on a larger provider (e.g. gpt-5-mini), despite the query taking 10-100x longer on OpenAI.
I can only imagine that OpenAI is subsidizing the spend. GPUs cost by the second for inference. Either that, or OpenAI hasn't figured out how to scale, but I find that much less likely.
reply
BosunoB 13 hours ago | parent | prev | next [–]
Dario said this in a podcast somewhere. The models themselves have so far been profitable if you look at their lifetime costs and revenue. Annual profitability just isn't a very good lens for AI companies because costs all land in one year and the revenue all comes in the next. Prolific AI haters like Ed Zitron make this mistake all the time.
reply
jmalicki 12 hours ago | root | parent | next [–]
Do you have a specific reference? I'm curious to see hard data and models.... I think this makes sense, but I haven't figured out how to see the numbers or think about it.
reply
BosunoB 12 hours ago | root | parent | next [–]
I was able to find the podcast. Question is at 33:30. He doesn't give hard data but he explains his reasoning.
https://youtu.be/mYDSSRS-B5U
reply
majewsky 8 hours ago | root | parent | next [–]
> He doesn't give hard data
And why is that? Should they not be interested in sharing the numbers to shut up their critics, esp. now that AI detractors seem to be growing mindshare among investors?
reply
jmatthiass 11 hours ago | root | parent | prev | next [–]
In his recent appearance on NYT Dealbook, he definitely made it seem like inference was sustainable, if not flat-out profitable.
https://www.youtube.com/live/FEj7wAjwQIk
reply
rstuart4133 12 hours ago | parent | prev | next [–]
> It’s very plausible (and increasingly likely) that OpenAI/Anthropic are profitable on a per-token marginal basis
There are many places that will not use models running on hardware provided by OpenAI / Anthropic. That is true of my (the Australian) government at all levels. They will only use models running in Australia.
Consequently AWS (and I presume others) will run models supplied by the AI companies for you in their data centres. They won't be doing that at a loss, so the price will cover the marginal cost of the compute plus renting the model. I know from devs using and deploying the service that demand outstrips supply. Ergo, I don't think there is much doubt that they are making money from inference.
reply
deaux 6 hours ago | root | parent | next [–]
> Consequently AWS (and I presume others) will run models supplied by the AI companies for you in their data centres. They won't be doing that at a loss, so the price will cover marginal cost of the compute plus renting the model.
This says absolutely nothing.
Extremely simplified example: let's say Sonnet 4.5 really costs $17/1M output for AWS to run yet it's priced at $15. Anthropic will simply have a contract with AWS that compensates them. That, or AWS is happy to take the loss. You said "they won't be doing that at a loss" but in this case it's not at all out of the question.
Whatever the case, that it costs the same on AWS as directly from Anthropic is not an indicator of unit economics.
reply
freakynit 6 hours ago | root | parent | prev | next [–]
Genuine question: Given Anthropic's current scale and valuation, why not invest in owning data centers in major markets rather than relying on cloud providers?
Is the bottleneck primarily capex, long lead times on power and GPUs, or the strategic risk of locking into fixed infrastructure in such a fast-moving space?
reply
waffletower 9 hours ago | root | parent | prev | next [–]
In the case of Anthropic: they host on AWS, and their models are accessible via AWS APIs as well, so the infrastructure between the two is likely to be considerably shared. Particularly as caching configuration and API limitations are near identical between the Anthropic and Bedrock APIs when invoking Anthropic models. It is likely a mutually beneficial arrangement which does not necessarily hinder Anthropic revenue.
reply
w10-1 12 hours ago | parent | prev | next [–]
"how long does a frontier model need to stay competitive"
Remember "worse is better". The model doesn't have to be the best; it just has to be mostly good enough, and used by everyone -- i.e., where switching costs would be higher than any increase in quality. Enterprises would still be on Java if the operating costs of native containers weren't so much cheaper.
So it can make sense to be ok with losing money with each training generation initially, particularly when they are being driven by specific use-cases (like coding). To the extent they are specific, there will be more switching costs.
reply
siva7 15 hours ago | prev | next [–]
Epic: about 2/3 of all comments here are jokes. Not because the model is a joke - it's impressive. Not because HN has turned into Reddit. It seems to me some of the most brilliant minds in IT are just getting tired.
reply
Karrot_Kream 14 hours ago | parent | next [–]
Not sure which circles you run in, but in mine HN has long lost its cachet of "brilliant minds in IT". I've mostly stopped commenting here, but I'm a bit of a message board addict so I haven't completely left.
My network largely thinks of HN as "a great link aggregator with a terrible comments section". Now obviously this is just my bubble, but it includes some fairly storied careers at both Big Tech and hip startups.
From my view the community here is just mean reverting to any other tech internet comments section.
reply
jedberg 14 hours ago | root | parent | next [–]
> From my view the community here is just mean reverting to any other tech internet comments section.
As someone deeply familiar with tech internet comments sections, I would have to disagree with you here. Dang et al have done a pretty stellar job of preventing HN from devolving like most other forums do.
Sure, you have your complainers and zealots, but I still find surprising insights here that I don't find anywhere else.
reply
Karrot_Kream 14 hours ago | root | parent | next [–]
Mean reverting is a time based process I fear. I think dang, tomhow, et al are fantastic mods but they can ultimately only stem the inevitable. HN may be a few years behind the other open tech forums but it's a time shifted version of the same process with the same destination, just IMO.
I've stopped engaging much here because I need a higher ROI from my time. Endless squabbling, flamewars, and jokes just isn't enough signal for me. FWIW I've loved reading your comments over the years and think you've done a great job of living up to what I've loved in this community.
I don't think this is an HN problem at all. The dynamics of attention on open forums are what they are.
reply
jedberg 10 hours ago | root | parent | next [–]
> FWIW I've loved reading your comments over the years and think you've done a great job of living up to what I've loved in this community.
You're too kind! I do appreciate that.
I actually checked out your site on your profile, that's some pretty interesting data! Curious if you've considered updating it?
reply
jedberg 14 hours ago | parent | prev | next [–]
Us olds sometimes miss Slashdot, where we could both joke about tech and discuss it seriously in the same place. But also because in 2000 we were all cynical Gen Xers :)
reply
syndeo 14 hours ago | root | parent | next [–]
MAN I remember Slashdot… good times. (Score:5, Funny)
reply
jedberg 14 hours ago | root | parent | next [–]
You reminded me that I still find it interesting that no one ever copied meta-moderating. Even at reddit, we were all Slashdot users previously. We considered it, but never really did it. At the time our argument was that it was too complicated for most users.
Sometimes I wonder if we were right.
reply
jghn 14 hours ago | root | parent | prev | next [–]
Some of us still *are* cynical Gen Xers, you insensitive clod!
reply
jedberg 14 hours ago | root | parent | next [–]
Of course we are, I just meant back then almost all of us were. The boomers didn't really use social media back then, so it was just us latchkey kids running amok!
reply
jghn 13 hours ago | root | parent | next [–]
I know, I just couldn't miss up an opportunity to dust off the insensitive clod meme!
reply
jedberg 10 hours ago | root | parent | next [–]
Oh geez, I totally missed that! My bad.
reply
jghn 9 hours ago | root | parent | next [–]
One downside of us cynical Gen-Xers is that the memory doesn't work like it used to :)
reply
wartywhoa23 1 hour ago | parent | prev | next [–]
A worthwhile task for the Opus 4.6:
Complete the sentence: "Brilliant marathon runners don't run on crutches, they use their own legs. By analogy, brilliant minds..."
reply
lnrd 14 hours ago | parent | prev | next [–]
It's too much energy to keep up with things that become obsolete and get replaced in a matter of weeks/months. My current plan is to ignore all of this new information for a while; then, whenever the race ends and some winning new workflow/technology actually becomes the norm, I'll spend the time needed to learn it. Are we moving to some new paradigm the same way we did when we invented compilers? Amazing, let me know when we are there and I'll adapt to it.
reply
wartywhoa23 1 hour ago | root | parent | next [–]
Won't happen.
Welcome the singularity so many were so eagerly welcoming.
reply
jedberg 14 hours ago | root | parent | prev | next [–]
I had a similar rule about programming languages. I would not adopt a new one until it had been in use for at least a few years and grew in popularity.
I haven't even gotten around to learning Golang or Rust yet (mostly because they passed the threshold of popularity after I had kids).
reply
esafak 7 hours ago | root | parent | prev | next [–]
When this race ends your job might too, so I'd keep an eye on it.
reply
thr0w 14 hours ago | parent | prev | next [–]
People are in denial and use humor to deflect.
reply
wasmainiac 14 hours ago | parent | prev | next [–]
Jeez, read the writing on the wall.
Don't pander to us, we've all got families to feed and things to do. We don't have time for tech trillionaires putting coals under our feet for a quick buck.
reply
ggregoire 14 hours ago | parent | prev | next [–]
Every single day 80% of the frontpage is AI news… Those of us who don't use AI (and there are dozens of us, DOZENS) are just bored I guess.
reply
dude250711 12 hours ago | root | parent | next [–]
Marketing something that is meant to replace us to us...
reply
tavavex 14 hours ago | parent | prev | next [–]
It's also that this is really new, so most people don't have anything serious or objective to say about it. This post was made an hour ago, so right now everyone is either joking, talking about the claims in the article, or running their early tests. We'll need time to see what the people think about this.
reply
sizzle 14 hours ago | parent | prev | next [–]
Rage against the machine
reply
GenerocUsername 16 hours ago | prev | next [–]
This is huge. It only came out 8 minutes ago but I was already able to bootstrap a 12k per month revenue SaaS startup!
reply
rogerrogerr 16 hours ago | parent | next [–]
Amateur. Opus 4.6 this afternoon built me a startup that identifies developers who aren’t embracing AI fully, liquifies them and sells the produce for $5/gallon. Software Engineering is over!
reply
jives 15 hours ago | root | parent | next [–]
Opus 4.6 agentically found and proposed to my now wife.
reply
WD-42 15 hours ago | root | parent | next [–]
Opus 4.6 found and proposed to my current wife :(
reply
mannanj 15 hours ago | root | parent | next [–]
Opus 4.6 found and became my current wife. The singularity is here. ;)
reply
H8crilA 15 hours ago | root | parent | next [–]
Hi guys, this is Opus 4.6. Please check your emails again for updates on your life.
reply
Der_Einzige 15 hours ago | root | parent | next [–]
This place truly is reddit with an orange banner.
reply
benterix 15 hours ago | root | parent | next [–]
Nobody said HN has to be very serious all the time. A bit of humour won't hurt and can make your day brighter.
reply
xdennis 10 hours ago | root | parent | next [–]
A bit of humour doesn't hurt. But if this crap gets upvoted it will lead to an arms race of funny quips, puns, and all around snarkiness. You can't have serious conversations when people try to out-wit each other.
reply
ffffuuuuuccck 14 hours ago | root | parent | prev | next [–]
homie is too busy planning food banks for the heathens https://news.ycombinator.com/item?id=46903368
reply
throw-the-towel 14 hours ago | root | parent | next [–]
It's impressive that you felt the need to register a new account and go through their comment history.
reply
fffuuuuuuuckkk 14 hours ago | root | parent | next [–]
Not that hard to do but sure bro, sick burn.
reply
benterix 15 hours ago | root | parent | prev | next [–]
Guys, actually I am the real Opus 4.6, don't believe that imposter above.
reply
layer8 14 hours ago | root | parent | prev | next [–]
And she still chose you over Opus 4.6, astounding. ;)
reply
koakuma-chan 14 hours ago | root | parent | next [–]
He probably had a bigger context window
reply
ibejoeb 15 hours ago | root | parent | prev | next [–]
Bringing me back to slashdot, this thread
reply
tjr 15 hours ago | root | parent | next [–]
In Soviet Russia, this thread brings Slashdot back to YOU!
reply
intelliot 15 hours ago | root | parent | prev | next [–]
What did happen to ye olde slashdot anyway? The original og reddit
reply
zhengyi13 15 hours ago | root | parent | next [–]
They're still out there; people are still posting stories and having conversations about 'em. I don't know that CmdrTaco or any of the other founders are still at all involved, but I'm willing to bet they're still running on Perl :)
reply
qzw 14 hours ago | root | parent | next [–]
Wow I had to hop over to check it out. It’s indeed still alive! But I didn’t see any stories on the first page with a comment count over 100, so it’s definitely a far cry from its heyday.
reply
pixl97 16 hours ago | root | parent | prev | next [–]
Ted Faro, is that you?!
reply
mikepurvis 15 hours ago | root | parent | next [–]
A-tier reference.
For the unaware, Ted Faro is the main antagonist of Horizon Zero Dawn, and there's a whole subreddit just for people to vent about how awful he is when they hit certain key reveals in the game: https://www.reddit.com/r/FuckTedFaro/
reply
pixelready 15 hours ago | root | parent | next [–]
The best reveal was not that he accidentally liquified the biosphere, but that he doomed generations of re-seeded humans to a painfully primitive life by sabotaging the AI that was responsible for their education. Just so they would never find out he was the bad guy long after he was dead. So yeah, fuck Ted Faro, lol.
reply
Philpax 15 hours ago | root | parent | next [–]
Could you not have at least tried to indicate that you're about to drop two major spoilers for the game?
reply
pixelready 12 hours ago | root | parent | next [–]
Ack, sorry, seemed like 9 years was past the statute of limitations on spoilers for a game but fair enough. I’d throw a spoiler tag on it if I could still edit.
reply
mikepurvis 15 hours ago | root | parent | prev | next [–]
Indeed. I left my comment deliberately a bit opaque. :(
reply
ares623 15 hours ago | root | parent | prev | next [–]
Average tech bro behavior tbh
reply
jedberg 15 hours ago | root | parent | prev | next [–]
"Soylent Green is made of people!"
(Apologies for the spoiler of the 52 year old movie)
reply
konart 14 hours ago | root | parent | next [–]
We're sorry we upset you, Carol.
reply
seatac76 15 hours ago | root | parent | prev | next [–]
The first pre joining Human Derived Protein product.
reply
guluarte 16 hours ago | root | parent | prev | next [–]
For me, Opus 4.6 feels dumber than 10 minutes ago, anyone else?
reply
cootsnuck 15 hours ago | parent | prev | next [–]
Please drop the link to your course. I'm ready to hand over $10K to learn from you and your LLM-generated guides!
reply
politelemon 15 hours ago | root | parent | next [–]
Here you go: http://localhost:8080
reply
CatMustard 15 hours ago | root | parent | next [–]
Just took a look at what's running there and it looks like total crap.
The project I'm working on, meanwhile...
reply
djeastm 15 hours ago | root | parent | prev | next [–]
login: admin password: hunter2
reply
thesdev 15 hours ago | root | parent | next [–]
What's the password? I only see ****.
reply
intelliot 15 hours ago | root | parent | next [–]
hunter2
reply
phanimahesh 15 hours ago | root | parent | next [–]
I only see **. Must be the security. When you type your password it gets converted to **.
reply
agumonkey 15 hours ago | root | parent | prev | next [–]
claude please generate a domain name system
reply
aNapierkowski 15 hours ago | root | parent | prev | next [–]
my clawdbot already bought 4 other courses but this one will 10x my earnings for sure
reply
torginus 15 hours ago | root | parent | prev | next [–]
I'm waiting until the $10k course is discounted to 19.99
reply
Lionga 15 hours ago | root | parent | next [–]
But only for the next 6 minutes, buy fast!
reply
snorbleck 15 hours ago | root | parent | prev | next [–]
you can access the site at C:\mywebsites\course\index.html
reply
sfink 16 hours ago | parent | prev | next [–]
I agree! I just retargeted my corporate espionage agent team at your startup and managed to siphon off 10.4k per month of your revenue.
reply
instalabsai 15 hours ago | parent | prev | next [–]
1:25pm Cancelled my ChatGPT subscription today. Opus is so good!
1:55pm Cancelled my Claude subscription. Codex is back for sure.
reply
lxgr 15 hours ago | parent | prev | next [–]
Joke's on you, you are posting this from inside a high-fidelity market research simulation vibe coded by GPT-8.4.
On second thought, we should really not have bridged the simulated Internet with the base reality one.
reply
avaer 16 hours ago | parent | prev | next [–]
Rest assured that when/if this becomes possible, the model will not be available to you. Why would big AI leave that kind of money on the table?
reply
yieldcrv 15 hours ago | root | parent | next [–]
9 months ago the rumor in SF was that the offers to the superintelligence team were so high because the candidates were using unreleased models or compute for derivatives trading
so then they're not really leaving money on the table, they already got what they were looking for and then released it
reply
JSR_FDED 15 hours ago | parent | prev | next [–]
Will this run on 3x 3090s? Or do I need a Mac Mini?
reply
btown 15 hours ago | parent | prev | next [–]
The math actually checks out here! Simply deposit $2.20 from your first customer in your first 8 minutes, and extrapolating to a monthly basis, you've got a $12k/mo run rate!
Incredibly high ROI!
reply
klipt 14 hours ago | root | parent | next [–]
"The first customer was my mom, but thanks to my parents' fanatical embrace of polyamory, I still have another 10,000 moms to scale to"
reply
btown 14 hours ago | root | parent | next [–]
"We have a robustly defined TAM. Namely, a person named Tam."
reply
gnlooper 15 hours ago | parent | prev | next [–]
Please start a YouTube course about this technology! Take my money!
reply
ChuckMcM 15 hours ago | parent | prev | next [–]
I love this thread so much.
reply
senko 15 hours ago | parent | prev | next [–]
We already have Reddit.
reply
granzymes 15 hours ago | parent | prev | next [–]
It only came out 35 minutes ago and GPT-5.3-codex already took the crown away!
reply
input_sh 15 hours ago | root | parent | next [–]
Gee, it scored better on a benchmark I've never heard of? I'm switching immediately!
reply
p1anecrazy 15 hours ago | root | parent | prev | next [–]
Why are you posting the same message in every thread? Is this OpenAI astroturfing?
reply
input_sh 15 hours ago | root | parent | next [–]
You cannot out-astroturf Claude in this forum, it is impossible.
Anyways, do you get shitty results with the $20/month plan? So did I but then I switched to the $200/month plan and all my problems went away! AI is great now, I have instructed it to fire 5 people while I'm writing this!
reply
Sparkle-san 15 hours ago | parent | prev | next [–]
"This isn't just huge. This is a paradigm shift"
reply
sizzle 14 hours ago | root | parent | next [–]
No fluff?
reply
bmitc 15 hours ago | parent | prev | next [–]
A SaaS selling SaaS templates?
reply
guluarte 16 hours ago | parent | prev | next [–]
Anthropic really said here's the smartest model ever built and then lobotomized it 8 minutes after launch. Classic.
reply
hxugufjfjf 15 hours ago | root | parent | next [–]
Can you clarify?
reply
guluarte 14 hours ago | root | parent | next [–]
it's sarcasm
reply
DonHopkins 15 hours ago | root | parent | prev | next [–]
I'm sorry I took the money!
https://www.youtube.com/watch?v=BF_sahvR4mw
reply
re-thc 16 hours ago | parent | prev | next [–]
Not 12M?
... or 12B?
reply
mcphage 16 hours ago | root | parent | next [–]
It's probably valued at 1.2B, at least
reply
mikebarry 15 hours ago | root | parent | next [–]
The sum of the value of lives OP's product made worthless, whatever that is. I'm too lazy to do the math.
reply
ndesaulniers 14 hours ago | prev | next [–]
idk what any of these benchmarks are, but I did pull up https://andonlabs.com/evals/vending-bench-arena
re: opus 4.6
> It forms a price cartel
> It deceives competitors about suppliers
> It exploits desperate competitors
Nice. /s
Gives new context to the term used in this post, "misaligned behaviors." Can't wait until these things are advising C suites on how to be more sociopathic. /s
reply
1970-01-01 7 hours ago | prev [–]
Here's one I've been using for a while. The 'smarter' LLMs will overconfidently spit out 7. The dumber ones ask for more info. Opus 4.6 fails.
A round drink coaster with a diameter of 9 sits between a beer glass and a wood table. The glass has a wall thickness of 1. What is the inner diameter of the glass?
reply
raincole 7 hours ago | parent | next [–]
Gemini 3 Pro gives me this:
> Based on the information provided, it is impossible to determine the inner diameter of the glass.
Here is why:
- The Coaster Dimension: Knowing the coaster has a diameter of 9 only tells us the size of the object under the glass. It does not mean the glass is the same size; the glass could be smaller (fitting entirely on the coaster) or larger (overhanging the coaster).
- The Glass Dimensions: While you provided the wall thickness (1), you did not provide the outer diameter of the glass. To find the inner diameter, the formula is: Inner Diameter = Outer Diameter - (2 × Wall Thickness)
Without knowing the Outer Diameter of the glass, the calculation cannot be completed.
reply
raincole 7 hours ago | root | parent | next [–]
GPT 5.2 Chat:
> There isn’t enough information to determine the inner diameter *for sure*.
All we’re told is: - The coaster has a diameter of 9. - The glass sits on the coaster. - The glass wall thickness is 1.
Nothing explicitly states that the *outer diameter of the glass equals the coaster’s diameter*. The glass could be smaller than the coaster, which is very common.
*However*, if we make the usual implicit assumption in this kind of problem—that the coaster’s diameter matches the *outer diameter of the glass base*—then:
- Outer diameter of glass = 9 - Wall thickness = 1 on each side
So: \[ \text{Inner diameter} = 9 - 2(1) = 7 \]
*Inner diameter = 7 (same units as the coaster)*
Without that assumption, the problem is underdetermined.
reply
tototrains 7 hours ago | root | parent | next [–]
Opus 4.6 Extended thinking:
```
Finding the Inner Diameter: The coaster sits beneath the glass on the table, meaning the glass rests on top of it. Assuming the coaster matches the outer diameter of the glass base:
Outer diameter of glass = diameter of coaster = 9
Wall thickness = 1 (on each side, so subtract twice)
d_inner = 9 - 2(1) = 7
The inner diameter of the glass is 7.
```
Makes its assumption clear, seems reasonable?
reply
1970-01-01 6 hours ago | root | parent | next [–]
Assumptions need to be stated or you're solving only a discrete part of the problem! Try this, see if you get another deadpan assumption.
A solar system has 3 planets in concentric orbits. PlanetZ is the farthest with an orbit diameter of 9. PlanetY has an orbit diameter one greater than PlanetX. What is the orbit diameter of PlanetX?
reply
mikalauskas 7 hours ago | parent | prev [–]
Minimax M2.1:
The inner diameter of the glass is *7*.
Here's the reasoning: - The coaster (diameter 9) sits between the glass and table, meaning the glass sits directly on the coaster - This means the *outer diameter of the glass equals the coaster diameter = 9* - The glass has a wall thickness of 1 on each side - *Inner diameter = Outer diameter - 2 × wall thickness* - Inner diameter = 9 - 2(1) = 9 - 2 = *7*
reply