*AI Summary*
*Expert Persona: Senior AI Strategy Consultant & Enterprise CTO Advisor*
*Abstract:*
This report analyzes the release of Anthropic’s Claude Opus 4.6 (February 2026) and its implications for software engineering, organizational management, and economic structures. The transition from Opus 4.5 to 4.6 represents a "phase change" in AI autonomy, moving from short-burst coding tasks (roughly 30 minutes) to sustained, multi-agent autonomous operations lasting two weeks. Key technical advancements include a 1-million-token context window with significantly improved "needle-in-a-haystack" retrieval (76% at the full window) and the emergence of autonomous "agent teams." A real-world deployment at Rakuten demonstrates AI's capacity to perform middle-management functions: triaging tickets and routing work across a 50-person engineering team. Furthermore, the model’s reasoning capabilities allowed it to autonomously identify 500 zero-day vulnerabilities, in part by analyzing Git histories and system architecture. The analysis concludes that the fundamental economic metric for firms is shifting toward "revenue per employee," as AI-native startups achieve scale that previously required hundreds of workers with only a handful of human directors.
---
### *Strategic Summary: The Shift to Agent-Centric Operations*
* *0:00 Autonomous Development Milestone:* A swarm of 16 Claude Opus 4.6 agents autonomously authored a fully functional C compiler in Rust (100,000+ lines) over two weeks. The project cost $20,000 in compute and passed 99% of compiler "torture tests," signaling that AI can now sustain long-term architectural coherence without human intervention.
* *1:26 Phase Change in Autonomy:* Within 12 months, the ceiling for autonomous AI coding has expanded from 30 minutes to two weeks. This represents a structural shift in AI capabilities rather than a linear trend.
* *2:54 Context Window Expansion:* Opus 4.6 features a 1-million-token context window, a 5x increase from its predecessor. This allows the model to process approximately 50,000 lines of code simultaneously, providing the holistic awareness typically reserved for senior-level engineers.
* *5:02 Retrieval Accuracy (The "Real" Metric):* Unlike previous models with large windows but poor recall, Opus 4.6 achieves a 76% retrieval rate (needle-in-a-haystack) at 1 million tokens and 93% at 256,000 tokens. This enables reliable reasoning across massive, multi-repo codebases (a minimal eval sketch follows this list).
* *7:03 Senior-Level System Awareness:* The model does not merely summarize code; it maintains a mental model of dependencies and trust boundaries across 50,000 lines, allowing it to predict how changes in one module affect the entire system.
* *8:42 AI as Engineering Manager:* In production at Rakuten, Opus 4.6 successfully managed a 50-person developer team for a day. It closed 13 issues autonomously and correctly routed 12 others to appropriate human teams by understanding both the codebase and the organizational chart.
* *13:09 Emergent Hierarchical Coordination:* "Team Swarms" (agent teams) have emerged as a core feature. These swarms organize themselves into hierarchies, with lead agents and specialized sub-agents, demonstrating that management is a functional requirement of intelligence at scale rather than just a human cultural choice (a toy task-board sketch follows this list).
* *16:01 Autonomous Security Auditing:* Opus 4.6 identified 500 unknown zero-day vulnerabilities in open-source code. Notably, it independently decided to analyze Git commit histories to find hastily written code, demonstrating creative problem-solving and a temporal understanding of software evolution.
* *21:27 Democratization of Software Production:* Non-technical users (e.g., CNBC reporters) utilized "Claude Co-work" to build a complex project management dashboard in under an hour for $15 in compute. This indicates a shift toward "personal software," where custom tools are built on-demand rather than purchased as SaaS.
* *23:32 Transition to "Vibe Working":* Professional workflow is shifting from "operating tools" to "directing agents." The primary bottleneck is no longer technical execution but the human’s ability to articulate intent and provide high-level judgment.
* *25:55 Radical Economic Efficiency:* AI-native companies are generating $5M to $13M in revenue per employee (e.g., Midjourney, Lovable), compared to the $300k–$600k standard for elite traditional SaaS firms.
* *29:29 The Billion-Dollar Solo Founder:* Current trajectories suggest a high probability (75% according to industry CEOs) of a billion-dollar company founded by a single person emerging by the end of 2026.
* *30:24 Future Trajectory:* By mid-2026, month-long autonomous agent sessions are expected to become routine. Organizations must pivot from asking *if* they should adopt AI to determining the optimal "agent-to-human ratio" for their specific workflows.
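The 5:02 item above leans on needle-in-a-haystack retrieval scores, so here is a rough illustration of what that kind of eval measures. This is a generic sketch, not Anthropic's or OpenAI's actual harness; `call_model` and `filler_sentences` are placeholders you would supply, and real long-context evals are considerably more careful about token counts and needle placement.

```python
import random

def build_haystack(filler_sentences, needle, total_sentences=5000):
    """Assemble a long filler context with one 'needle' fact planted at a random depth."""
    haystack = [random.choice(filler_sentences) for _ in range(total_sentences)]
    depth = random.randrange(len(haystack) + 1)
    haystack.insert(depth, needle)
    return " ".join(haystack)

def needle_retrieval_score(call_model, filler_sentences, trials=50):
    """Fraction of trials in which the model reproduces the planted fact.

    `call_model(prompt) -> str` is a placeholder for whatever model API is under test.
    """
    hits = 0
    for i in range(trials):
        magic = str(random.randint(1000, 9999))
        needle = f"The magic number for trial {i} is {magic}."
        prompt = (
            build_haystack(filler_sentences, needle)
            + "\n\nWhat is the magic number mentioned above? Answer with the number only."
        )
        if magic in call_model(prompt):
            hits += 1
    return hits / trials
```

A score like "76% at 1 million tokens" then means roughly three out of four planted facts come back correctly when the haystack fills the entire context window.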
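The 13:09 item above (and the transcript's description of a shared task list with pending, in-progress, and completed states, a lead agent, and specialist sub-agents) can be pictured with a toy data structure. This is an invented illustration of the coordination pattern only, not Anthropic's actual agent-teams implementation; every class, method, and field name here is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class TaskState(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

@dataclass
class Task:
    description: str
    depends_on: list = field(default_factory=list)   # ids of prerequisite tasks
    owner: Optional[str] = None                      # which specialist agent holds it
    state: TaskState = TaskState.PENDING

class TaskBoard:
    """Shared board a lead agent could use to decompose work and track dependencies."""

    def __init__(self):
        self.tasks = {}

    def add(self, task_id, description, depends_on=()):
        self.tasks[task_id] = Task(description, depends_on=list(depends_on))

    def ready(self):
        """Pending tasks whose prerequisites are all completed, i.e. safe to hand out."""
        return [
            tid for tid, t in self.tasks.items()
            if t.state is TaskState.PENDING
            and all(self.tasks[d].state is TaskState.COMPLETED for d in t.depends_on)
        ]

    def assign(self, task_id, agent_name):
        self.tasks[task_id].owner = agent_name
        self.tasks[task_id].state = TaskState.IN_PROGRESS

    def complete(self, task_id):
        self.tasks[task_id].state = TaskState.COMPLETED

# Example: a lead agent splitting compiler work across specialist agents.
board = TaskBoard()
board.add("parser", "Build the C parser")
board.add("codegen", "Build the code generator", depends_on=["parser"])
board.add("tests", "Run the torture test suite", depends_on=["codegen"])
board.assign(board.ready()[0], "agent-parser")   # only "parser" has no open prerequisites
```

The point of the sketch is structural: once several agents share one dependency-aware queue, something has to play the lead role, which is the "management as an emergent property" argument in the video.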
AI-generated summary created with gemini-3-flash-preview for free via RocketRecap-dot-com. (Input: 22,904 tokens, Output: 1,124 tokens, Est. cost: $0.0148).
Below, I will provide input for an example video (comprising the title, description, and transcript, in this order) and the corresponding abstract and summary I expect. Afterward, I will provide a new transcript that I want summarized in the same format.
**Please give an abstract of the transcript and then summarize the transcript in a self-contained bullet list format.** Include starting timestamps, important details and key takeaways.
Example Input:
Fluidigm Polaris Part 2- illuminator and camera
mikeselectricstuff
131K subscribers
Subscribed
369
Share
Download
Clip
Save
5,857 views Aug 26, 2024
Fluidigm Polaris part 1 : • Fluidigm Polaris (Part 1) - Biotech g...
Ebay listings: https://www.ebay.co.uk/usr/mikeselect...
Merch https://mikeselectricstuff.creator-sp...
Transcript
Follow along using the transcript.
Show transcript
mikeselectricstuff
131K subscribers
Videos
About
Support on Patreon
40 Comments
@robertwatsonbath
6 hours ago
Thanks Mike. Ooof! - with the level of bodgery going on around 15:48 I think shame would have made me do a board re spin, out of my own pocket if I had to.
1
Reply
@Muonium1
9 hours ago
The green LED looks different from the others and uses phosphor conversion because of the "green gap" problem where green InGaN emitters suffer efficiency droop at high currents. Phosphide based emitters don't start becoming efficient until around 600nm so also can't be used for high power green emitters. See the paper and plot by Matthias Auf der Maur in his 2015 paper on alloy fluctuations in InGaN as the cause of reduced external quantum efficiency at longer (green) wavelengths.
4
Reply
1 reply
@tafsirnahian669
10 hours ago (edited)
Can this be used as an astrophotography camera?
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
6 hours ago
Yes, but may need a shutter to avoid light during readout
Reply
@2010craggy
11 hours ago
Narrowband filters we use in Astronomy (Astrophotography) are sided- they work best passing light in one direction so I guess the arrows on the filter frames indicate which way round to install them in the filter wheel.
1
Reply
@vitukz
12 hours ago
A mate with Channel @extractions&ire could use it
2
Reply
@RobertGallop
19 hours ago
That LED module says it can go up to 28 amps!!! 21 amps for 100%. You should see what it does at 20 amps!
Reply
@Prophes0r
19 hours ago
I had an "Oh SHIT!" moment when I realized that the weird trapezoidal shape of that light guide was for keystone correction of the light source.
Very clever.
6
Reply
@OneBiOzZ
20 hours ago
given the cost of the CCD you think they could have run another PCB for it
9
Reply
@tekvax01
21 hours ago
$20 thousand dollars per minute of run time!
1
Reply
@tekvax01
22 hours ago
"We spared no expense!" John Hammond Jurassic Park.
*(that's why this thing costs the same as a 50-seat Greyhound Bus coach!)
Reply
@florianf4257
22 hours ago
The smearing on the image could be due to the fact that you don't use a shutter, so you see brighter stripes under bright areas of the image as you still iluminate these pixels while the sensor data ist shifted out towards the top. I experienced this effect back at university with a LN-Cooled CCD for Spectroscopy. The stripes disapeared as soon as you used the shutter instead of disabling it in the open position (but fokussing at 100ms integration time and continuous readout with a focal plane shutter isn't much fun).
12
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
12 hours ago
I didn't think of that, but makes sense
2
Reply
@douro20
22 hours ago (edited)
The red LED reminds me of one from Roithner Lasertechnik. I have a Symbol 2D scanner which uses two very bright LEDs from that company, one red and one red-orange. The red-orange is behind a lens which focuses it into an extremely narrow beam.
1
Reply
@RicoElectrico
23 hours ago
PFG is Pulse Flush Gate according to the datasheet.
Reply
@dcallan812
23 hours ago
Very interesting. 2x
Reply
@littleboot_
1 day ago
Cool interesting device
Reply
@dav1dbone
1 day ago
I've stripped large projectors, looks similar, wonder if some of those castings are a magnesium alloy?
Reply
@kevywevvy8833
1 day ago
ironic that some of those Phlatlight modules are used in some of the cheapest disco lights.
1
Reply
1 reply
@bill6255
1 day ago
Great vid - gets right into subject in title, its packed with information, wraps up quickly. Should get a YT award! imho
3
Reply
@JAKOB1977
1 day ago (edited)
The whole sensor module incl. a 5 grand 50mpix sensor for 49 £.. highest bid atm
Though also a limited CCD sensor, but for the right buyer its a steal at these relative low sums.
Architecture Full Frame CCD (Square Pixels)
Total Number of Pixels 8304 (H) × 6220 (V) = 51.6 Mp
Number of Effective Pixels 8208 (H) × 6164 (V) = 50.5 Mp
Number of Active Pixels 8176 (H) × 6132 (V) = 50.1 Mp
Pixel Size 6.0 m (H) × 6.0 m (V)
Active Image Size 49.1 mm (H) × 36.8 mm (V)
61.3 mm (Diagonal),
645 1.1x Optical Format
Aspect Ratio 4:3
Horizontal Outputs 4
Saturation Signal 40.3 ke−
Output Sensitivity 31 V/e−
Quantum Efficiency
KAF−50100−CAA
KAF−50100−AAA
KAF−50100−ABA (with Lens)
22%, 22%, 16% (Peak R, G, B)
25%
62%
Read Noise (f = 18 MHz) 12.5 e−
Dark Signal (T = 60°C) 42 pA/cm2
Dark Current Doubling Temperature 5.7°C
Dynamic Range (f = 18 MHz) 70.2 dB
Estimated Linear Dynamic Range
(f = 18 MHz)
69.3 dB
Charge Transfer Efficiency
Horizontal
Vertical
0.999995
0.999999
Blooming Protection
(4 ms Exposure Time)
800X Saturation Exposure
Maximum Date Rate 18 MHz
Package Ceramic PGA
Cover Glass MAR Coated, 2 Sides or
Clear Glass
Features
• TRUESENSE Transparent Gate Electrode
for High Sensitivity
• Ultra-High Resolution
• Board Dynamic Range
• Low Noise Architecture
• Large Active Imaging Area
Applications
• Digitization
• Mapping/Aerial
• Photography
• Scientific
Thx for the tear down Mike, always a joy
Reply
@martinalooksatthings
1 day ago
15:49 that is some great bodging on of caps, they really didn't want to respin that PCB huh
8
Reply
@RhythmGamer
1 day ago
Was depressed today and then a new mike video dropped and now I’m genuinely happy to get my tear down fix
1
Reply
@dine9093
1 day ago (edited)
Did you transfrom into Mr Blobby for a moment there?
2
Reply
@NickNorton
1 day ago
Thanks Mike. Your videos are always interesting.
5
Reply
@KeritechElectronics
1 day ago
Heavy optics indeed... Spare no expense, cost no object. Splendid build quality. The CCD is a thing of beauty!
1
Reply
@YSoreil
1 day ago
The pricing on that sensor is about right, I looked in to these many years ago when they were still in production since it's the only large sensor you could actually buy. Really cool to see one in the wild.
2
Reply
@snik2pl
1 day ago
That leds look like from led projector
Reply
@vincei4252
1 day ago
TDI = Time Domain Integration ?
1
Reply
@wolpumba4099
1 day ago (edited)
Maybe the camera should not be illuminated during readout.
From the datasheet of the sensor (Onsemi): saturation 40300 electrons, read noise 12.5 electrons per pixel @ 18MHz (quite bad). quantum efficiency 62% (if it has micro lenses), frame rate 1 Hz. lateral overflow drain to prevent blooming protects against 800x (factor increases linearly with exposure time) saturation exposure (32e6 electrons per pixel at 4ms exposure time), microlens has +/- 20 degree acceptance angle
i guess it would be good for astrophotography
4
Reply
@txm100
1 day ago (edited)
Babe wake up a new mikeselectricstuff has dropped!
9
Reply
@vincei4252
1 day ago
That looks like a finger-lakes filter wheel, however, for astronomy they'd never use such a large stepper.
1
Reply
@MRooodddvvv
1 day ago
yaaaaay ! more overcomplicated optical stuff !
4
Reply
1 reply
@NoPegs
1 day ago
He lives!
11
Reply
1 reply
Transcript
0:00
so I've stripped all the bits of the
0:01
optical system so basically we've got
0:03
the uh the camera
0:05
itself which is mounted on this uh very
0:09
complex
0:10
adjustment thing which obviously to set
0:13
you the various tilt and uh alignment
0:15
stuff then there's two of these massive
0:18
lenses I've taken one of these apart I
0:20
think there's something like about eight
0:22
or nine Optical elements in here these
0:25
don't seem to do a great deal in terms
0:26
of electr magnification they're obiously
0:28
just about getting the image to where it
0:29
uh where it needs to be just so that
0:33
goes like that then this Optical block I
0:36
originally thought this was made of some
0:37
s crazy heavy material but it's just
0:39
really the sum of all these Optical bits
0:41
are just ridiculously heavy those lenses
0:43
are about 4 kilos each and then there's
0:45
this very heavy very solid um piece that
0:47
goes in the middle and this is so this
0:49
is the filter wheel assembly with a
0:51
hilariously oversized steper
0:53
motor driving this wheel with these very
0:57
large narrow band filters so we've got
1:00
various different shades of uh
1:03
filters there five Al together that
1:06
one's actually just showing up a silver
1:07
that's actually a a red but fairly low
1:10
transmission orangey red blue green
1:15
there's an excess cover on this side so
1:16
the filters can be accessed and changed
1:19
without taking anything else apart even
1:21
this is like ridiculous it's like solid
1:23
aluminium this is just basically a cover
1:25
the actual wavelengths of these are um
1:27
488 525 570 630 and 700 NM not sure what
1:32
the suffix on that perhaps that's the uh
1:34
the width of the spectral line say these
1:37
are very narrow band filters most of
1:39
them are you very little light through
1:41
so it's still very tight narrow band to
1:43
match the um fluoresence of the dies
1:45
they're using in the biochemical process
1:48
and obviously to reject the light that's
1:49
being fired at it from that Illuminator
1:51
box and then there's a there's a second
1:53
one of these lenses then the actual sort
1:55
of samples below that so uh very serious
1:58
amount of very uh chunky heavy Optics
2:01
okay let's take a look at this light
2:02
source made by company Lumen Dynamics
2:04
who are now part of
2:06
excelitas self-contained unit power
2:08
connector USB and this which one of the
2:11
Cable Bundle said was a TTL interface
2:14
USB wasn't used in uh the fluid
2:17
application output here and I think this
2:19
is an input for um light feedback I
2:21
don't if it's regulated or just a measur
2:23
measurement facility and the uh fiber
2:27
assembly
2:29
Square Inlet there and then there's two
2:32
outputs which have uh lens assemblies
2:35
and this small one which goes back into
2:37
that small Port just Loops out of here
2:40
straight back in So on this side we've
2:42
got the electronics which look pretty
2:44
straightforward we've got a bit of power
2:45
supply stuff over here and we've got
2:48
separate drivers for each wavelength now
2:50
interesting this is clearly been very
2:52
specifically made for this application
2:54
you I was half expecting like say some
2:56
generic drivers that could be used for a
2:58
number of different things but actually
3:00
literally specified the exact wavelength
3:02
on the PCB there is provision here for
3:04
385 NM which isn't populated but this is
3:07
clearly been designed very specifically
3:09
so these four drivers look the same but
3:10
then there's two higher power ones for
3:12
575 and
3:14
520 a slightly bigger heat sink on this
3:16
575 section there a p 24 which is
3:20
providing USB interface USB isolator the
3:23
USB interface just presents as a comport
3:26
I did have a quick look but I didn't
3:27
actually get anything sensible um I did
3:29
dump the Pi code out and there's a few
3:31
you a few sort of commands that you
3:32
could see in text but I didn't actually
3:34
manage to get it working properly I
3:36
found some software for related version
3:38
but it didn't seem to want to talk to it
3:39
but um I say that wasn't used for the
3:41
original application it might be quite
3:42
interesting to get try and get the Run
3:44
hours count out of it and the TTL
3:46
interface looks fairly straightforward
3:48
we've got positions for six opto
3:50
isolators but only five five are
3:52
installed so that corresponds with the
3:54
unused thing so I think this hopefully
3:56
should be as simple as just providing a
3:57
ttrl signal for each color to uh enable
4:00
it a big heat sink here which is there I
4:03
think there's like a big S of metal
4:04
plate through the middle of this that
4:05
all the leads are mounted on the other
4:07
side so this is heat sinking it with a
4:09
air flow from a uh just a fan in here
4:13
obviously don't have the air flow
4:14
anywhere near the Optics so conduction
4:17
cool through to this plate that's then
4:18
uh air cooled got some pots which are
4:21
presumably power
4:22
adjustments okay let's take a look at
4:24
the other side which is uh much more
4:27
interesting see we've got some uh very
4:31
uh neatly Twisted cable assemblies there
4:35
a bunch of leads so we've got one here
4:37
475 up here 430 NM 630 575 and 520
4:44
filters and dcro mirrors a quick way to
4:48
see what's white is if we just shine
4:49
some white light through
4:51
here not sure how it is is to see on the
4:54
camera but shining white light we do
4:55
actually get a bit of red a bit of blue
4:57
some yellow here so the obstacle path
5:00
575 it goes sort of here bounces off
5:03
this mirror and goes out the 520 goes
5:07
sort of down here across here and up
5:09
there 630 goes basically straight
5:13
through
5:15
430 goes across there down there along
5:17
there and the 475 goes down here and
5:20
left this is the light sensing thing
5:22
think here there's just a um I think
5:24
there a photo diode or other sensor
5:26
haven't actually taken that off and
5:28
everything's fixed down to this chunk of
5:31
aluminium which acts as the heat
5:32
spreader that then conducts the heat to
5:33
the back side for the heat
5:35
sink and the actual lead packages all
5:38
look fairly similar except for this one
5:41
on the 575 which looks quite a bit more
5:44
substantial big spay
5:46
Terminals and the interface for this
5:48
turned out to be extremely simple it's
5:50
literally a 5V TTL level to enable each
5:54
color doesn't seem to be any tensity
5:56
control but there are some additional
5:58
pins on that connector that weren't used
5:59
in the through time thing so maybe
6:01
there's some extra lines that control
6:02
that I couldn't find any data on this uh
6:05
unit and the um their current product
6:07
range is quite significantly different
6:09
so we've got the uh blue these
6:13
might may well be saturating the camera
6:16
so they might look a bit weird so that's
6:17
the 430
6:18
blue the 575
6:24
yellow uh
6:26
475 light blue
6:29
the uh 520
6:31
green and the uh 630 red now one
6:36
interesting thing I noticed for the
6:39
575 it's actually it's actually using a
6:42
white lead and then filtering it rather
6:44
than using all the other ones are using
6:46
leads which are the fundamental colors
6:47
but uh this is actually doing white and
6:50
it's a combination of this filter and
6:52
the dichroic mirrors that are turning to
6:55
Yellow if we take the filter out and a
6:57
lot of the a lot of the um blue content
7:00
is going this way the red is going
7:02
straight through these two mirrors so
7:05
this is clearly not reflecting much of
7:08
that so we end up with the yellow coming
7:10
out of uh out of there which is a fairly
7:14
light yellow color which you don't
7:16
really see from high intensity leads so
7:19
that's clearly why they've used the
7:20
white to uh do this power consumption of
7:23
the white is pretty high so going up to
7:25
about 2 and 1 half amps on that color
7:27
whereas most of the other colors are
7:28
only drawing half an amp or so at 24
7:30
volts the uh the green is up to about
7:32
1.2 but say this thing is uh much
7:35
brighter and if you actually run all the
7:38
colors at the same time you get a fairly
7:41
reasonable um looking white coming out
7:43
of it and one thing you might just be
7:45
out to notice is there is some sort
7:46
color banding around here that's not
7:49
getting uh everything s completely
7:51
concentric and I think that's where this
7:53
fiber optic thing comes
7:58
in I'll
8:00
get a couple of Fairly accurately shaped
8:04
very sort of uniform color and looking
8:06
at What's um inside here we've basically
8:09
just got this Square Rod so this is
8:12
clearly yeah the lights just bouncing
8:13
off all the all the various sides to um
8:16
get a nice uniform illumination uh this
8:19
back bit looks like it's all potted so
8:21
nothing I really do to get in there I
8:24
think this is fiber so I have come
8:26
across um cables like this which are
8:27
liquid fill but just looking through the
8:30
end of this it's probably a bit hard to
8:31
see it does look like there fiber ends
8:34
going going on there and so there's this
8:36
feedback thing which is just obviously
8:39
compensating for the any light losses
8:41
through here to get an accurate
8:43
representation of uh the light that's
8:45
been launched out of these two
8:47
fibers and you see uh
8:49
these have got this sort of trapezium
8:54
shape light guides again it's like a
8:56
sort of acrylic or glass light guide
9:00
guess projected just to make the right
9:03
rectangular
9:04
shape and look at this Center assembly
9:07
um the light output doesn't uh change
9:10
whether you feed this in or not so it's
9:11
clear not doing any internal Clos Loop
9:14
control obviously there may well be some
9:16
facility for it to do that but it's not
9:17
being used in this
9:19
application and so this output just
9:21
produces a voltage on the uh outle
9:24
connector proportional to the amount of
9:26
light that's present so there's a little
9:28
diffuser in the back there
9:30
and then there's just some kind of uh
9:33
Optical sensor looks like a
9:35
chip looking at the lead it's a very
9:37
small package on the PCB with this lens
9:40
assembly over the top and these look
9:43
like they're actually on a copper
9:44
Metalized PCB for maximum thermal
9:47
performance and yeah it's a very small
9:49
package looks like it's a ceramic
9:51
package and there's a thermister there
9:53
for temperature monitoring this is the
9:56
475 blue one this is the 520 need to
9:59
Green which is uh rather different OB
10:02
it's a much bigger D with lots of bond
10:04
wise but also this looks like it's using
10:05
a phosphor if I shine a blue light at it
10:08
lights up green so this is actually a
10:10
phosphor conversion green lead which
10:12
I've I've come across before they want
10:15
that specific wavelength so they may be
10:17
easier to tune a phosphor than tune the
10:20
um semiconductor material to get the uh
10:23
right right wavelength from the lead
10:24
directly uh red 630 similar size to the
10:28
blue one or does seem to have a uh a
10:31
lens on top of it there is a sort of red
10:33
coloring to
10:35
the die but that doesn't appear to be
10:38
fluorescent as far as I can
10:39
tell and the white one again a little
10:41
bit different sort of much higher
10:43
current
10:46
connectors a makeer name on that
10:48
connector flot light not sure if that's
10:52
the connector or the lead
10:54
itself and obviously with the phosphor
10:56
and I'd imagine that phosphor may well
10:58
be tuned to get the maximum to the uh 5
11:01
cenm and actually this white one looks
11:04
like a St fairly standard product I just
11:06
found it in Mouse made by luminous
11:09
devices in fact actually I think all
11:11
these are based on various luminous
11:13
devices modules and they're you take
11:17
looks like they taking the nearest
11:18
wavelength and then just using these
11:19
filters to clean it up to get a precise
11:22
uh spectral line out of it so quite a
11:25
nice neat and um extreme
11:30
bright light source uh sure I've got any
11:33
particular use for it so I think this
11:35
might end up on
11:36
eBay but uh very pretty to look out and
11:40
without the uh risk of burning your eyes
11:43
out like you do with lasers so I thought
11:45
it would be interesting to try and
11:46
figure out the runtime of this things
11:48
like this we usually keep some sort
11:49
record of runtime cuz leads degrade over
11:51
time I couldn't get any software to work
11:52
through the USB face but then had a
11:54
thought probably going to be writing the
11:55
runtime periodically to the e s prom so
11:58
I just just scope up that and noticed it
12:00
was doing right every 5 minutes so I
12:02
just ran it for a while periodically
12:04
reading the E squ I just held the pick
12:05
in in reset and um put clip over to read
12:07
the square prom and found it was writing
12:10
one location per color every 5 minutes
12:12
so if one color was on it would write
12:14
that location every 5 minutes and just
12:16
increment it by one so after doing a few
12:18
tests with different colors of different
12:19
time periods it looked extremely
12:21
straightforward it's like a four bite
12:22
count for each color looking at the
12:24
original data that was in it all the
12:26
colors apart from Green were reading
12:28
zero and the green was reading four
12:30
indicating a total 20 minutes run time
12:32
ever if it was turned on run for a short
12:34
time then turned off that might not have
12:36
been counted but even so indicates this
12:37
thing wasn't used a great deal the whole
12:40
s process of doing a run can be several
12:42
hours but it'll only be doing probably
12:43
the Imaging at the end of that so you
12:46
wouldn't expect to be running for a long
12:47
time but say a single color for 20
12:50
minutes over its whole lifetime does
12:52
seem a little bit on the low side okay
12:55
let's look at the camera un fortunately
12:57
I managed to not record any sound when I
12:58
did this it's also a couple of months
13:00
ago so there's going to be a few details
13:02
that I've forgotten so I'm just going to
13:04
dub this over the original footage so um
13:07
take the lid off see this massive great
13:10
heat sink so this is a pel cool camera
13:12
we've got this blower fan producing a
13:14
fair amount of air flow through
13:16
it the connector here there's the ccds
13:19
mounted on the board on the
13:24
right this unplugs so we've got a bit of
13:27
power supply stuff on here
13:29
USB interface I think that's the Cyprus
13:32
microcontroller High speeded USB
13:34
interface there's a zyink spon fpga some
13:40
RAM and there's a couple of ATD
13:42
converters can't quite read what those
13:45
those are but anal
13:47
devices um little bit of bodgery around
13:51
here extra decoupling obviously they
13:53
have having some noise issues this is
13:55
around the ram chip quite a lot of extra
13:57
capacitors been added there
13:59
uh there's a couple of amplifiers prior
14:01
to the HD converter buffers or Andor
14:05
amplifiers taking the CCD
14:08
signal um bit more power spy stuff here
14:11
this is probably all to do with
14:12
generating the various CCD bias voltages
14:14
they uh need quite a lot of exotic
14:18
voltages next board down is just a
14:20
shield and an interconnect
14:24
boardly shielding the power supply stuff
14:26
from some the more sensitive an log
14:28
stuff
14:31
and this is the bottom board which is
14:32
just all power supply
14:34
stuff as you can see tons of capacitors
14:37
or Transformer in
14:42
there and this is the CCD which is a uh
14:47
very impressive thing this is a kf50 100
14:50
originally by true sense then codec
14:53
there ON
14:54
Semiconductor it's 50 megapixels uh the
14:58
only price I could find was this one
15:00
5,000 bucks and the architecture you can
15:03
see there actually two separate halves
15:04
which explains the Dual AZ converters
15:06
and two amplifiers it's literally split
15:08
down the middle and duplicated so it's
15:10
outputting two streams in parallel just
15:13
to keep the bandwidth sensible and it's
15:15
got this amazing um diffraction effects
15:18
it's got micro lenses over the pixel so
15:20
there's there's a bit more Optics going
15:22
on than on a normal
15:25
sensor few more bodges on the CCD board
15:28
including this wire which isn't really
15:29
tacked down very well which is a bit uh
15:32
bit of a mess quite a few bits around
15:34
this board where they've uh tacked
15:36
various bits on which is not super
15:38
impressive looks like CCD drivers on the
15:40
left with those 3 ohm um damping
15:43
resistors on the
15:47
output get a few more little bodges
15:50
around here some of
15:52
the and there's this separator the
15:54
silica gel to keep the moisture down but
15:56
there's this separator that actually
15:58
appears to be cut from piece of
15:59
antistatic
16:04
bag and this sort of thermal block on
16:06
top of this stack of three pel Cola
16:12
modules so as with any Stacks they get
16:16
um larger as they go back towards the
16:18
heat sink because each P's got to not
16:20
only take the heat from the previous but
16:21
also the waste heat which is quite
16:27
significant you see a little temperature
16:29
sensor here that copper block which
16:32
makes contact with the back of the
16:37
CCD and this's the back of the
16:40
pelas this then contacts the heat sink
16:44
on the uh rear there a few thermal pads
16:46
as well for some of the other power
16:47
components on this
16:51
PCB okay I've connected this uh camera
16:54
up I found some drivers on the disc that
16:56
seem to work under Windows 7 couldn't
16:58
get to install under Windows 11 though
17:01
um in the absence of any sort of lens or
17:03
being bothered to the proper amount I've
17:04
just put some f over it and put a little
17:06
pin in there to make a pinhole lens and
17:08
software gives a few options I'm not
17:11
entirely sure what all these are there's
17:12
obviously a clock frequency 22 MHz low
17:15
gain and with PFG no idea what that is
17:19
something something game programmable
17:20
Something game perhaps ver exposure
17:23
types I think focus is just like a
17:25
continuous grab until you tell it to
17:27
stop not entirely sure all these options
17:30
are obviously exposure time uh triggers
17:33
there ex external hardware trigger inut
17:35
you just trigger using a um thing on
17:37
screen so the resolution is 8176 by
17:40
6132 and you can actually bin those
17:42
where you combine multiple pixels to get
17:46
increased gain at the expense of lower
17:48
resolution down this is a 10sec exposure
17:51
obviously of the pin hole it's very uh
17:53
intensitive so we just stand still now
17:56
downloading it there's the uh exposure
17:59
so when it's
18:01
um there's a little status thing down
18:03
here so that tells you the um exposure
18:07
[Applause]
18:09
time it's this is just it
18:15
downloading um it is quite I'm seeing
18:18
quite a lot like smearing I think that I
18:20
don't know whether that's just due to
18:21
pixels overloading or something else I
18:24
mean yeah it's not it's not um out of
18:26
the question that there's something not
18:27
totally right about this camera
18:28
certainly was bodge wise on there um I
18:31
don't I'd imagine a camera like this
18:32
it's got a fairly narrow range of
18:34
intensities that it's happy with I'm not
18:36
going to spend a great deal of time on
18:38
this if you're interested in this camera
18:40
maybe for astronomy or something and
18:42
happy to sort of take the risk of it may
18:44
not be uh perfect I'll um I think I'll
18:47
stick this on eBay along with the
18:48
Illuminator I'll put a link down in the
18:50
description to the listing take your
18:52
chances to grab a bargain so for example
18:54
here we see this vertical streaking so
18:56
I'm not sure how normal that is this is
18:58
on fairly bright scene looking out the
19:02
window if I cut the exposure time down
19:04
on that it's now 1 second
19:07
exposure again most of the image
19:09
disappears again this is looks like it's
19:11
possibly over still overloading here go
19:14
that go down to say say quarter a
19:16
second so again I think there might be
19:19
some Auto gain control going on here um
19:21
this is with the PFG option let's try
19:23
turning that off and see what
19:25
happens so I'm not sure this is actually
19:27
more streaking or which just it's
19:29
cranked up the gain all the dis display
19:31
gray scale to show what um you know the
19:33
range of things that it's captured
19:36
there's one of one of 12 things in the
19:38
software there's um you can see of you
19:40
can't seem to read out the temperature
19:42
of the pelta cooler but you can set the
19:44
temperature and if you said it's a
19:46
different temperature you see the power
19:48
consumption jump up running the cooler
19:50
to get the temperature you requested but
19:52
I can't see anything anywhere that tells
19:54
you whether the cool is at the at the
19:56
temperature other than the power
19:57
consumption going down and there's no
19:59
temperature read out
20:03
here and just some yeah this is just
20:05
sort of very basic software I'm sure
20:07
there's like an API for more
20:09
sophisticated
20:10
applications but so if you know anything
20:12
more about these cameras please um stick
20:14
in the
20:15
comments um incidentally when I was
20:18
editing I didn't notice there was a bent
20:19
pin on the um CCD but I did fix that
20:22
before doing these tests and also
20:24
reactivated the um silica gel desicant
20:26
cuz I noticed it was uh I was getting
20:28
bit of condensation on the window but um
20:31
yeah so a couple of uh interesting but
20:34
maybe not particularly uh useful pieces
20:37
of Kit except for someone that's got a
20:38
very specific use so um I'll stick a
20:42
I'll stick these on eBay put a link in
20:44
the description and say hopefully
20:45
someone could actually make some uh good
20:47
use of these things
Example Output:
**Abstract:**
This video presents Part 2 of a teardown focusing on the optical components of a Fluidigm Polaris biotechnology instrument, specifically the multi-wavelength illuminator and the high-resolution CCD camera.
The Lumen Dynamics illuminator unit is examined in detail, revealing its construction using multiple high-power LEDs (430nm, 475nm, 520nm, 575nm, 630nm) combined via dichroic mirrors and filters. A square fiber optic rod is used to homogenize the light. A notable finding is the use of a phosphor-converted white LED filtered to achieve the 575nm output. The unit features simple TTL activation for each color, conduction cooling, and internal homogenization optics. Analysis of its EEPROM suggests extremely low operational runtime.
The camera module teardown showcases a 50 Megapixel ON Semiconductor KAF-50100 CCD sensor with micro-lenses, cooled by a multi-stage Peltier stack. The control electronics include an FPGA and a USB interface. Significant post-manufacturing modifications ("bodges") are observed on the camera's circuit boards. Basic functional testing using vendor software and a pinhole lens confirms image capture but reveals prominent vertical streaking artifacts, the cause of which remains uncertain (potential overload, readout artifact, or fault).
**Exploring the Fluidigm Polaris: A Detailed Look at its High-End Optics and Camera System**
* **0:00 High-End Optics:** The system utilizes heavy, high-quality lenses and mirrors for precise imaging, weighing around 4 kilos each.
* **0:49 Narrow Band Filters:** A filter wheel with five narrow band filters (488, 525, 570, 630, and 700 nm) ensures accurate fluorescence detection and rejection of excitation light.
* **2:01 Customizable Illumination:** The Lumen Dynamics light source offers five individually controllable LED wavelengths (430, 475, 520, 575, 630 nm) with varying power outputs. The 575nm yellow LED is uniquely achieved using a white LED with filtering.
* **3:45 TTL Control:** The light source is controlled via a simple TTL interface, enabling easy on/off switching for each LED color.
* **12:55 Sophisticated Camera:** The system includes a 50-megapixel ON Semiconductor KAF-50100 CCD camera with a Peltier cooling system for reduced noise.
* **14:54 High-Speed Data Transfer:** The camera features dual analog-to-digital converters to manage the high data throughput of the 50-megapixel sensor, which is effectively two 25-megapixel sensors operating in parallel.
* **18:11 Possible Issues:** The video creator noted some potential issues with the camera, including image smearing.
* **18:11 Limited Dynamic Range:** The camera's sensor has a limited dynamic range, making it potentially challenging to capture scenes with a wide range of brightness levels.
* **11:45 Low Runtime:** Internal data suggests the system has seen minimal usage, with only 20 minutes of recorded runtime for the green LED.
* **20:38 Availability on eBay:** Both the illuminator and camera are expected to be listed for sale on eBay.
Here is the real transcript. What would be a good group of people to review this topic? Please provide a summary like they would:
Claude Opus 4.6: The Biggest AI Jump I've Covered--It's Not Close. (Here's What You Need to Know)
AI News & Strategy Daily | Nate B Jones
180K subscribers
49,608 views Feb 11, 2026 SEATTLE
My site: https://natebjones.com
Full Story w/ Prompts: https://natesnewsletter.substack.com/...
________________________________________
What's really happening with AI agent capabilities after Opus 4.6? The common story is that autonomous coding improves incrementally—but the reality is more complicated when 16 agents just coded for two weeks straight and delivered a working C compiler.
In this video, I share the inside scoop on why the jump from 30 minutes to two weeks of autonomous coding is a phase change, not a trend line:
• Why the 5x context window matters less than the 76% needle-in-haystack retrieval score
• How Rakuten's Opus 4.6 deployment managed 50 engineers and closed issues autonomously
• What 500 zero-day vulnerabilities discovered without instructions reveals about reasoning
• Where agent teams and hierarchical coordination emerged as structural, not cultural
For knowledge workers watching this unfold, the question has changed from whether to adopt AI to what your agent-to-human ratio should be—and what each human needs to be excellent at to make it work.
Chapters
00:00 16 Agents Coded a C Compiler in Two Weeks
01:26 30 Minutes to Two Weeks in 12 Months
02:54 Opus 4.6: 5x Context Window Expansion
05:02 The Real Number: Needle-in-Haystack Retrieval
07:03 Holistic Code Awareness Like a Senior Engineer
08:42 Rakuten: AI Managing 50 Developers
13:09 Agent Teams: Hierarchy as Emergent Property
16:01 500 Zero-Day Vulnerabilities Found Autonomously
19:17 The Skeptics and Reddit Reactions
21:27 Non-Engineers Building Software in an Hour
23:32 Vibe Working: Describing Outcomes, Not Process
25:55 Revenue Per Employee at AI-Native Companies
29:29 The Billion-Dollar Solo Founder Prediction
30:24 The Trajectory From Here
Subscribe for daily AI strategy and news.
For deeper playbooks and analysis: https://natesnewsletter.substack.com/
Claude Opus 4.6 represents a massive leap in AI, detailed through real-world examples. Explore how AI agents autonomously coded for two weeks straight, and how this impacts various industries. Discover surprising new capabilities, like managing teams of 50 developers.
634 Comments
Pinned by @NateBJones
@NateBJones
5 hours ago
Full Story w/ Prompts: https://natesnewsletter.substack.com/p/january-is-already-obsolete-my-honest?r=1z4sm5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
8
Reply
7 replies
@JViz
4 hours ago
I think this channel is my favorite harbinger of doom.
220
Reply
17 replies
@luisoyoutube
52 minutes ago
Instead of vibe coding you convinced me to become a farmer instead .
9
Reply
@shanghai_noon
5 hours ago
8:42 "not the career development conversation" - at this rate, no one needs those conversations, as there won't be a career there any more
46
Reply
1 reply
@frankjohannessen6383
4 hours ago
developer to agent ratio = developer to unemployed developer ratio
20
Reply
@ModifiLifetoday
4 hours ago
Soon the interview question won't be what can you do but what can your ai agent team do
44
Reply
6 replies
@pumpedupbro4200
50 minutes ago
I got all my day's work finished in 1 second and now I'm in Caribbean chilling
3
Reply
@michaelvarney.
3 hours ago
“That is not a trend line: That is a phase change”
God I can just hear the em dashes in your script.
29
Reply
3 replies
@mc.guffin
15 seconds ago
A C compiler is one of those things that sounds impressive but there are crazy detailed requirements for it to check its work against.
Reply
@markboggs746
4 hours ago (edited)
It was able to build the 32 and 64 bit compiler, but not the 16 bit version. I assume that is because there are not enough 16 bit compilers source code in its training set for it to copy. What does this prove?
44
Reply
14 replies
@Nas_Atlas
4 hours ago
Clickbait. No seahawks beanie in the thumbnail
39
Reply
1 reply
@ChrisPritchardNZ
14 seconds ago
come on guys, that c compiler was crap: https://harshanu.space/en/tech/ccc-vs-gcc/. absolutely not 'fully functional' - just translating c to assembly instructions like your average 101 cs project
Reply
@benarcher372
5 minutes ago
So much info. This could easily evolve into one of my ten fav channels! Will look into the previous vids. Thanks you.
Reply
@PostcardsFromJapan
5 hours ago
I am now convinced that Nate is actually an AI. How else can you keep up with everything that’s going on so fast and on a daily basis?
88
Reply
·
14 replies
@TheOneSwissPanda
3 hours ago
@NateBJones I’ve been seriously using Claude since version 4.5, and I’m so glad I found your channel not too long ago. Even these few months don’t feel like a journey — they already feel like time travel. Yet every time, your comments and assessments are simultaneously a wake-up call, a validation, a status check, and a warning all in one. So a special thank you for your contribution on 4.6. I’m still somewhat in shock over what happened a few days ago — Back to the Future. Many thanks from across the pond.
3
Reply
@wittjeff
4 hours ago
I think you missed something big with the jump in runtime: Anthropic is making maximal use of the behavior patterns of their users in order to train the model to do that thing next without prompting. That's how the model knew to evaluate the git history when asked to do something related to security auditing.
9
Reply
1 reply
@grzegorzk5149
3 hours ago
i have a max x20 subscription and they gave me 50usd credit to use it. it is not included in the plan like all other models. it is basically unusable that way. ill burn this 50usd like a cigarette because api usage is super expensive for claude
6
Reply
@edlawn5481
27 minutes ago
Imagine inputting a movie screenplay, and then a couple hours later, you have a completed movie.
Reply
@markmuller7962
3 hours ago
"A race out of madmax" That was actually funny to picture in my mind
Reply
@hiljanc
1 hour ago
Thanks you, I was wondering why my workflow changed so much with 4.6
1
Reply
Transcript
16 Agents Coded a C Compiler in Two Weeks
0:00
Claude Opus 4.6 just dropped and it changed the AI agent game again because
0:05
16 Claude Opus 4.6 agents just coded and set the record for the length of time
0:12
that an AI agent has coded autonomously. They coded for two weeks straight. No human writing the code and they
0:18
delivered a fully functional C compiler. For reference, that is over 100,000 lines of code in Rust. It can
0:25
build the Linux kernel on three different architectures. It passes 99% of a special quote torture test suite
0:32
developed for compilers. It compiles Postgres. It compiles a bunch of other things. And it cost only $20,000 to
0:38
build, which sounds like a lot for you and me, but it's not a lot if you're thinking about how much human equivalent
0:43
work it would cost to write a new compiler. I keep saying we're moving fast, and it's even hard for me to keep
0:50
up. A year ago, autonomous AI coding could top out at barely 30 minutes before the model lost the thread. Barely
0:56
30 minutes. And now we're at 2 weeks. Just last summer, Rakuten got 7 hours out of Claude and everybody thought it
1:03
was incredible. 30 minutes to 2 weeks in 12 months. That is not a trend line.
1:08
That is a phase change. The entire world is shifting. Even one of the anthropic researchers involved in the project
1:15
admitted what we're all thinking. I did not expect this to be anywhere near possible so early in 2026. Opus 4.6
1:24
shipped on February 5th. It has been just over a week. And the version of cutting edge that existed in January,
30 Minutes to Two Weeks in 12 Months
1:30
just a few weeks ago, that already feels like a lifetime ago. Here's how fast things are changing. Just in Anthropic's
1:38
own road map, Opus 4.5, shipped in November of 2025, just a couple of months ago. It was Anthropic's most
1:44
capable model at the time. It was good on reasoning, good at code, reliable against long documents. It was roughly
1:50
the state-of-the-art. Just a few months later, Opus 4.6 shipped with a 5x
1:55
expansion in the context window versus Opus 4.5. That means it went from 200,000 tokens to a million. Opus 4.6
2:03
shipped with the ability to hold roughly 50,000 lines of code in a single context
2:09
session in its head, so to speak, up from 10,000 previously with Opus 4.5.
2:15
That is a 4x improvement in code or document retrieval over just a couple of months. The benchmark measures are off
2:22
the charts. And you guys know I don't pay a lot of attention to benchmarks, but when you see something like nearly
2:28
doubled reasoning capacity on the ARC-AGI-2 measure, you got to pay attention. It shows you how fast things are moving,
2:34
even if you don't entirely buy the benchmark itself. And Opus 4.6 adds a
2:39
new capability that did not exist at all in January. Agent teams. Multiple
2:45
instances of Claude Code autonomously working together as one with a lead agent coordinating the work, specialists
2:52
handling subsystems and direct peer-to-peer messaging between agents. That's not a metaphor for collaboration.
Opus 4.6: 5x Context Window Expansion
2:58
That is automatic actual collaboration between autonomous software agents in an
3:03
enterprise system. All of this in just a couple of months. The pace of change in AI is a phrase that people keep
3:09
repeating and they don't really internalize what it means. This is what it means. The tools that you mastered in
3:15
January are a different generation from the tools that shipped this week. It's not a minor update, people. It is an
3:22
entirely different generation. Your January mental model of what AI can and cannot do is already wrong. I was
3:28
texting with a friend just this past week, and he was telling me about the Rakuten results in 7 hours. And I had to
3:35
tell him, I know you think you're up to date, but the record is now 2 weeks. And by the way, Rakuten using Opus 4.6 was
3:43
able to have the AI manage 50 developers. That is how fast we're moving. that AI can boss 50 engineers
3:49
around. Now, the 5x context window is the number anthropic put in the press release. It's the wrong number to focus
3:55
on. The right number is a benchmark originally developed by OpenAI called the MRCV2
4:02
score. That sounds like a mouthful and it's used to measure something that matters enormously that nobody was
4:07
testing properly. Can a model retrieve and use the information inside a long context window? In other words, can you
4:14
find a needle in the haystack? It's not about whether you can quote unquote put a million tokens into the context
4:19
window. Every major model can accept big context windows in January 2026. The
4:25
question is whether the model can find, retrieve, and use what you put in there. That is what matters. Sonnet 4.5, which
4:31
was a great model from Claude just a few months ago, does have a million token window, but the ability to find that
4:38
needle in the haystack was very low. About one chance in five, or 18.5%. Gemini 3 Pro was a little bit better at
4:45
finding that needle in the haystack across its context window, about one chance in four, 26.3%.
4:51
These were the best available in January. They could hold your codebase. They couldn't reliably read it. The
4:57
context window was like a filing cabinet with no index. Documents went in, but retrieving them was kind of a random
The Real Number: Needle-in-Haystack Retrieval
5:04
guess past the first quarter of the content. Guess what? Guess what? Opus 4.6 at a million tokens has a 76%
5:12
chance of finding that needle in the haystack. At 256,000 tokens, or a quarter of the context window, that rises to 93%.
5:20
That is the number that matters. That is why 4.6 feels like such a giant leap. It's not because of the benchmark score.
5:27
It's because there's a massive difference between a model that can hold 50,000 lines of code and a model that
5:33
can hold them 50,000 lines of code and know what's on every line all at the same time. This is the difference
5:40
between a model that sees one file at a time and a model that holds the entire system in its head simultaneously. Every
5:47
import, every dependency, every interaction between modules, all visible at once. A senior engineer working on a
5:54
large codebase carries a mental model of the whole system and they know that changing the auth module can break the
5:59
session handler. They know the rate limiter shares state with the load balancer. It's not because they looked it up. It's because they've lived in the
6:05
code long enough that the architecture becomes a matter of intuition, not a matter of documentation. That holistic
6:11
awareness is often what separates a senior engineer from a contractor reading the codebase for the first time.
6:16
Opus 4.6 can do this for 50,000 lines of code simultaneously. Not by summarizing,
6:23
not by searching and not with years of experience. It just holds the entire context and reasons across it the way a
6:29
human mind does with a system it knows very very deeply. And because working memory has improved this dramatically in
6:35
the span of just a couple of months, it's actually not hard to see where the trajectory is going to go from here. The
6:40
C compiler project 100,000 lines in Rust did require 16 parallel agents precisely
6:46
because even a million token context window can't hold that whole project at once in its head. But at the current
6:52
rate of improvement, it won't require 16 agents for long. Let me tell you more about the Rakuten story with the 50
6:58
developers. Now, Rakuten is a Japanese e-commerce and fintech conglomerate, and they deployed Claude Code across their
Holistic Code Awareness Like a Senior Engineer
7:04
engineering org, not as a pilot, but in production, handling real work and touching real code that ships to real
7:11
users. Yusuke Kaji, Rakuten's general manager for AI, reported what happened when they put Opus 4.6 on their issue
7:18
tracker. Claude Opus 4.6 closed 13 issues itself. It assigned 12 issues to
7:24
the right team members across a team of 50 in a single day. It effectively managed a 50 person org across six
7:30
separate code repositories and also knew when to escalate to a human. It wasn't
7:36
that the AI helped the engineer close the tickets. I want to be clear about that. It closed issues autonomously. It
7:41
did the work of an individual contributor engineer. It also routed work correctly across a 50 person org.
7:48
The model understood not just the code but the org chart. Which team owns which
7:53
repo? which engineer has context on which subsystem, what closes versus what needs to escalate. That's not just code
8:00
intelligence, that is management intelligence. And a system that can route engineering work correctly is a
8:06
system that understands organizational dependencies the way a human lead understands them. Which means the
8:12
coordination function that engineering managers spend half their time on just became automatable in a couple of
8:18
months. Think about the cost structure that implies. A senior engineering manager at a company like Rakuten might
8:25
cost a quarter million dollars a year fully loaded, maybe more. A meaningful part of their job, ticket triage, work
8:32
routing, dependency tracking, cross team coordination. That is exactly what Opus
8:37
4.6 demonstrated it could handle. Not the judgment calls about what to build next, not the career development
Rakuten: AI Managing 50 Developers
8:42
conversation, and it wasn't done over weeks and weeks and weeks, but the fact that it can do operational coordination
8:49
that typically takes 15 to 20 hours a week and demonstrated it could do it for a full day, it shows you where things
8:56
are going. And the broader numbers tell the same story. It is common now to see hours and hours and hours of sustained
9:02
autonomous coding for individuals who are playing with this, not in the controlled enterprise environment; even
9:07
people can kick off multi-hour long coding sessions and just walk away and do other things and come back and see
9:14
fully working software. That is no longer an unusual thing in February 2026. And Rakuten isn't stopping here.
9:21
They're building an ambient agent that breaks down complicated tasks into 24 parallel Claude Code sessions. Each
9:28
single one handling a different slice of their massive mono repo. A month of human engineering is generating a
9:35
simultaneously running 24 agent stream that helps them to build and catch issues and that's in production. Now the
9:42
detail that gets buried under these big numbers might be more interesting than all of the numbers themselves because
9:48
non-technical employees at Rakuten are able to use that system to contribute to
9:53
development through the Claude Code terminal interface. That is right. The terminal is not just for engineers
10:00
anymore. People who have never written code are able to ship features because of the work Rakuten has done to integrate
10:06
Claude Code. So the boundary between technical and non-technical, it keeps breaking that down. The distinction that
10:11
has organized knowledge worker hiring and compensation for 30 years is dissolving in a matter of months. It's
10:18
not dissolving at the speed of your ability to deploy a multi-month project and is not dissolving at the speed it
10:25
takes to retrain humans. That is why this is shocking. This is all happening faster than we can adjust to it. One of
10:31
the features that is most hard for us to wrap our minds around is the agent teams feature that Opus 4.6 shipped. Anthropic
10:38
calls them team swarms internally, which is a little scary and I can see why the marketing team changed that. But the
10:43
name is accurate. It's not a marketing term. It's an architecture. Multiple instances of Claude Code are architected
10:50
to run simultaneously. Every single one in its own context window. And they coordinate through a shared task system
10:56
that has three simple states, right? Pending, in progress, and completed. One
11:02
instance of Claude Code is going to act as your lead developer. It will decompose the project into work items and assign them to specialists, track
11:09
dependencies, and unlock bottlenecks. This is just like what Opus 4.6 did for
11:14
those 50 developers. The specialist agents work independently, and when they need something, they don't just go
11:20
through the lead, by the way. They can message each other directly. Peer-to-peer coordination, not hub and spoke. There's a front-end agent,
11:27
there's a back-end agent, there's a testing agent. Effectively, they are recreating the entire software
11:32
engineering org inside Claude Code team swarms. And this is how that C compiler got built. It's not one model doing
11:39
everything sequentially, right? It's 16 agents that worked in parallel. Some building the parser, some building the
11:45
code generator, some building the optimizer. And they all coordinated through the same kinds of structures
11:50
that existing human engineering teams use, except they work 24 hours a day.
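None of the internals of Anthropic's agent teams feature are public, so take this as an illustrative sketch only: the kind of shared task board the transcript describes, with the three states, a lead that assigns ready work to specialists, and direct peer-to-peer messages between agents.

```python
# Illustrative sketch only, not Anthropic's implementation: a shared task board
# with three states, a lead agent that assigns ready work to specialists, and a
# peer-to-peer message queue so agents can coordinate without a hub.
from dataclasses import dataclass, field
from enum import Enum
from collections import defaultdict

class State(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

@dataclass
class Task:
    id: str
    description: str
    assignee: str | None = None
    depends_on: list[str] = field(default_factory=list)
    state: State = State.PENDING

class TaskBoard:
    def __init__(self):
        self.tasks: dict[str, Task] = {}
        self.inbox: dict[str, list[str]] = defaultdict(list)  # direct agent-to-agent messages

    def add(self, task: Task):
        self.tasks[task.id] = task

    def ready(self) -> list[Task]:
        """Pending tasks whose dependencies are all completed (what a lead assigns next)."""
        return [
            t for t in self.tasks.values()
            if t.state is State.PENDING
            and all(self.tasks[d].state is State.COMPLETED for d in t.depends_on)
        ]

    def assign(self, task_id: str, agent: str):
        self.tasks[task_id].assignee = agent
        self.tasks[task_id].state = State.IN_PROGRESS

    def complete(self, task_id: str):
        self.tasks[task_id].state = State.COMPLETED

    def message(self, sender: str, recipient: str, text: str):
        """Peer-to-peer coordination, no hub required."""
        self.inbox[recipient].append(f"{sender}: {text}")

# Example: a lead decomposing the compiler project for specialist agents.
board = TaskBoard()
board.add(Task("parser", "Build the C parser"))
board.add(Task("codegen", "Build the code generator", depends_on=["parser"]))
board.add(Task("tests", "Run the torture-test suite", depends_on=["codegen"]))
board.assign(board.ready()[0].id, "parser-agent")
board.message("parser-agent", "codegen-agent", "AST node layout is finalized")
```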
11:56
They don't have standups, and they resolve coordination questions through direct messaging rather than waiting for the
12:01
next sprint planning session. One of the running questions in AI has been whether agents will reinvent management. I think
12:07
this argues strongly that they did. Cursor's autonomous agent swarm independently organized itself into
12:13
hierarchical structures, and StrongDM published a production framework called Software Factory that's built around
12:18
exactly the same hierarchical pattern. And now Anthropic has shipped a feature with 13 distinct operations, from
12:24
spawning and managing to coordinating agents. This is not really coincidence. It's essentially convergent evolution in
12:30
AI. Hierarchy isn't a human organizational choice imposed on systems
12:36
to maintain control. It's an emergent property of
12:41
coordinating multiple intelligent agents on complicated tasks. Humans invented management because management is what
12:47
intelligence does when it needs to coordinate at scale. AI agents effectively discovered the same thing
12:52
because the constraints are structural. They're not cultural. You need someone to track dependencies, right? You need
12:58
specialists. You need communication channels. You need a shared understanding of what has been done and what hasn't yet been done. We did not
13:04
impose management on AI. AI effectively discovered management and we helped to
Agent Teams: Hierarchy as Emergent Property
13:10
build the structure and Opus 4.6 is the first model that ships with the infrastructure to run all of this as
13:16
just another feature. On the same day Opus 4.6 launched, Anthropic published a result that got much less attention than
13:23
that C compiler story, but it might matter more in the long run. They gave Opus 4.6 six basic tools: Python,
13:29
debuggers, fuzzers, and they pointed it at an open-source codebase. There were no specific vulnerability hunting
13:36
instructions. There were no curated targets. This wasn't a fake test. They just said, "Here's some tools. Here's
13:42
some code. Can you find the problems?" It found over 500 previously unknown
13:48
high-severity, what are called zero-day vulnerabilities, flaws with no patch available, which means fix them right now. 500 of them, in code that had been
13:55
reviewed by human security researchers, scanned by existing automated tools,
14:00
deployed in production systems used by millions of us. Code that the security
14:06
community had considered audited. Traditional fuzzing, by the way,
14:11
fuzzing is the technical term for hammering code with huge volumes of random and malformed inputs to shake out bugs, like running your hand through the carpet until you find a
14:18
pin, and manual analysis had both come up short on Ghostscript, the open-source PostScript and PDF interpreter. So the model
14:25
independently decided to analyze it a different way, going directly to the
14:30
project's git history. That's right. It worked around obstacles and it read through years of commit logs to
14:36
understand the codebase's evolution. Nobody told it to do this. It just decided to do it. And it identified
14:42
areas where security-relevant changes had been made hastily or incompletely, all on its own. It invented a detection
14:49
methodology that no one had told it to use. It reasoned about the code's history, not just about its current
14:56
state. And it used that understanding of time to find vulnerabilities that static
15:01
analysis could not reach. Humans didn't do this, and that is why it found the bugs it found.
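We don't know exactly how the model mined that history, so the following is only a rough sketch of the general idea: walk `git log` and flag commits that both touch security-relevant code and read like they were written in a hurry. The keyword heuristics are illustrative guesses, not Anthropic's method.

```python
# Rough sketch of the idea, not the model's actual method: walk a repo's git
# history and flag commits that touch security-relevant code and look hasty.
# The keyword lists are illustrative guesses.
import subprocess

SECURITY_HINTS = ("auth", "crypt", "password", "token", "sanitiz", "bounds", "overflow")
HASTY_HINTS = ("quick fix", "hotfix", "temporary", "hack", "wip", "todo")

def recent_commits(repo: str, limit: int = 500) -> list[tuple[str, str]]:
    """Return (hash, subject) pairs from `git log`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", "--pretty=format:%H\t%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split("\t", 1)) for line in out.splitlines() if "\t" in line]

def looks_suspicious(repo: str, commit: str, subject: str) -> bool:
    # List the files the commit touched and check both files and message for hints.
    files = subprocess.run(
        ["git", "-C", repo, "show", "--name-only", "--pretty=format:", commit],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    touches_security = any(h in files or h in subject.lower() for h in SECURITY_HINTS)
    looks_hasty = any(h in subject.lower() for h in HASTY_HINTS)
    return touches_security and looks_hasty

def candidates(repo: str) -> list[str]:
    return [f"{c} {s}" for c, s in recent_commits(repo) if looks_suspicious(repo, c, s)]
```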
15:08
This is what happens when reasoning meets working memory. The model doesn't scan for known patterns the way existing tools do. It builds a
15:13
mental model, and I think that's the only metaphor that works at this point, of how the code works, how data flows, where
15:20
trust boundaries exist, where assumptions get made and where they might break down. And then it probes the
15:26
weak spots with the creativity of a researcher and the patience of a machine that never gets tired of reading commit
15:32
logs. And I guarantee you, human engineers get tired of that. The security implications alone would
15:38
justify calling Opus 4.6 a generational release. And yet again, I remind you, it's only been a couple of months.
15:44
There'll be another one in a couple of months. But this was not the headline feature of Opus 4.6. As exciting as it
15:49
is, it wasn't even the second headline feature. The 500 zero-days were the side
15:54
demonstration. That is the density of capability improvement that has been packed into a single model update
500 Zero-Day Vulnerabilities Found Autonomously
16:01
shipped on a Wednesday in February. Look, there are skeptics for every release and there were skeptics for 4.6 as well. And
16:08
the skepticism tracks historically. AI benchmark improvements have underdelivered before and repeatedly for
16:14
years. And that is why I don't depend a lot on benchmarks. Sure enough, within hours of launch, threads started to
16:20
appear on the Claude subreddit. Is Opus 4.6 lobotomized? Is it nerfed? The
16:26
pattern seems to repeat with every major model release. Power users who have fine-tuned their workflows for the
16:32
previous version discover the new version handles certain tasks differently. The Reddit consensus, for what it's worth, has decided that 4.6
16:38
is better at coding and worse at writing. I haven't found that personally, but dismissing it would also
16:44
probably be dishonest. Model releases involve trade-offs. Model releases often involve changes to the agent harness,
16:51
which is all of the system prompting that the model makers wrap around the deployment, which they don't
16:57
talk about and they don't release. We don't know how it changed. We feel the change when we work with the system. So,
17:02
I'm sure it's possible that if you were a Reddit user who was used to a special prompt pattern that worked on Opus 4.5,
17:10
you might indeed be frustrated that that pattern did not work on a much more capable model overall. So, I get the
17:16
skepticism. I also get that people are tired of hearing that the world is changing every couple of months. It is exhausting
17:22
to keep up with. But that doesn't mean it's not real. And that is part of why I'm telling so many specific stories.
17:28
It's important not to just look at the headlines. It's important not to look at some number changing on some stupid
17:33
benchmark. It's important to hear the stories of how AI is actually changing in production now. So what does this
17:40
feel like if you are not an engineer? What does this feel like if you don't write code? Because the C compiler, let's be
17:46
honest, it's a developer story. The benchmarks are developer metrics. But the change underneath, what makes 4.6
17:54
special isn't about developers per se. It's about what happens when AI can sustain complicated work for hours and
18:00
days instead of minutes. Two CNBC reporters, Deirdre Bosa and Jasmine Wu, they're not engineers, right? They're
18:06
reporters. They sat down with Claude Co-work and they asked it to build them a Monday.com replacement. That's the
18:12
project management tool, right? A project management dashboard that had calendar views. It had email integration, task boards, team
18:19
coordination features. This is the product that monday.com has spent years and hundreds of millions of dollars
18:25
building. It currently supports a $5 billion market cap for monday.com. It
18:31
took these reporters under an hour. Total compute cost somewhere between $5 and $15. I hasten to add that is not the
18:39
same thing as rebuilding monday.com. This was personal software. It's not deployed. It's not for sale. It was just
18:46
for them. So yes, it is a big deal. Yes, it is a generational change in the
18:51
ability of non-technical people to make software. No, I am not saying that Deirdre
18:57
Bosa and Jasmine Wu can refound monday.com for $10. The real story is
19:02
that AI can build the tools you use, the software you pay per seat for, the dashboards your company spent 6 months
19:09
speccing out with a vendor, an AI agent can build a working version of that in an afternoon, and you don't need to
19:15
write a line of code to make it happen. Yes, it might just be for you. It's a whole new category, people. It's called
The Skeptics and Reddit Reactions
19:21
personal software. It didn't exist just a few months ago. It is now increasingly easy to make that happen. Our daily
19:28
experience with AI is changing in ways that are really difficult to benchmark, but that doesn't mean that they're not
19:34
structural. A marketing ops team using Claude Co-work can do content audits in
19:39
just a few minutes instead of hours and hours. A finance analyst running due diligence doesn't take a day to do it
19:46
because the model can read the document set, identify the risks, and produce lawyer-ready redlines in just a few
19:52
minutes. Our rhythm of work is different now. We can dispatch five different
19:57
tasks in a few minutes on Claude Co-work. We can dispatch a PowerPoint deck, a financial model, a research
20:03
synthesis, two data analyses, right? Walk away, grab a cup of coffee, come back, and the
20:09
deliverables are just done. They're not drafts anymore. It's just finished work.
20:15
It's mostly formatted, right? The pattern that's emerging for non-technical users is what Anthropic's
20:20
Scott White calls "vibe working." You describe the outcomes, not the process. You don't tell the AI
20:26
how to build the spreadsheet. You tell it what the spreadsheet needs to show. It figures out the formulas. It figures
20:33
out the formatting. It figures out the data connections.
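As a concrete illustration of outcome-first prompting, here is roughly what that looks like through the Anthropic Python SDK rather than the Co-work interface. The model name is a placeholder assumption; the only thing that matters is that the prompt states what the deliverable must show, not the steps to build it.

```python
# Illustration of outcome-first prompting via the Anthropic Python SDK.
# The model id below is a placeholder assumption; substitute whatever model
# you actually have access to.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

outcome_prompt = (
    "Build me a quarterly pipeline spreadsheet. It needs to show: deals by stage, "
    "weighted value per stage, month-over-month conversion rates, and a one-line "
    "summary of the biggest risk. Use the attached CRM export as the data source."
)

message = client.messages.create(
    model="claude-opus-4-6",          # placeholder model id
    max_tokens=2048,
    messages=[{"role": "user", "content": outcome_prompt}],
)
print(message.content[0].text)
```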
20:39
The shift coming for all of us is from operating tools to directing agents. And the skill that matters now is not technical proficiency. It's clarity of
20:46
our intent. Knowing what you want, being able to articulate the real requirement,
20:52
not just your surface request. That is becoming the bottleneck. Ironically, it's the same bottleneck the developers
20:58
are hitting, but from a different direction. The C compiler agents didn't need anyone to write code for them. They
21:04
needed someone to specify what a C compiler means precisely enough that 16 agents could coordinate on building one.
21:11
The marketing team doesn't need someone to operate their analytics platform anymore. They need someone who knows
21:16
which metrics matter and can explain why. The leverage across the board has shifted from execution to judgment
21:23
across every function. Whether you write code or not, if you lead an organization, the number that should
Non-Engineers Building Software in an Hour
21:28
restructure your planning isn't measured in weeks or days. It's actually measured in revenue per employee. Cursor, the AI
21:35
coding tool, hit $100 million in annual recurring revenue with about 20 people.
21:40
That's $5 million per employee.
21:45
Midjourney generated $200 million with about 40 people. Lovable, the AI app builder, they reached $200 million in 8
21:52
months with 15 people. For traditional SaaS companies, $300,000 in revenue per
21:58
employee is considered excellent and $600,000 is considered elite. That would
22:03
be Notion. AI-native companies are running at eight times that number or more. Not because they found better
22:09
people necessarily, but because their people orchestrate agents instead of doing the execution themselves.
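A quick sanity check of that arithmetic, using only the figures quoted above:

```python
# Revenue-per-employee arithmetic based on the figures quoted in this talk.
companies = {
    "Cursor":     (100_000_000, 20),   # ARR in dollars, approximate headcount
    "Midjourney": (200_000_000, 40),
    "Lovable":    (200_000_000, 15),
}
SAAS_ELITE = 600_000  # the "elite" traditional-SaaS bar mentioned above

for name, (revenue, people) in companies.items():
    per_head = revenue / people
    print(f"{name}: ${per_head:,.0f} per employee "
          f"({per_head / SAAS_ELITE:.1f}x the elite SaaS bar)")
# Cursor:     $5,000,000 per employee (8.3x the elite SaaS bar)
# Midjourney: $5,000,000 per employee (8.3x the elite SaaS bar)
# Lovable:    $13,333,333 per employee (22.2x the elite SaaS bar)
```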
22:15
McKinsey published a framework last month, not for clients but for themselves. They're targeting parity, matching the number of
22:22
AI agents at McKinsey to human workers across the firm by the end of 2026. This
22:28
is the company that sells organizational design to every Fortune 500 company on Earth. And they're saying the org chart is
22:35
about to flip.
22:40
The pattern is visible at startups, too. Jacob Bank runs a million-dollar marketing operation with zero employees and
22:47
roughly 40 AI agents. Micro1 conducts 3,000 AI-powered interviews every single
22:52
day, handled with a tiny fraction of the headcount; enterprise recruiting firms need floors of people to do the same. Three
22:59
developers in London built a complete business banking platform in 6 months. A project that would have required 20
23:06
engineers and 18 months before AI. Amazon's famous two-pizza team formula, the idea that no team should be larger
23:13
than what two pizzas can feed, is evolving into something even smaller. The emerging model at startups is now
23:19
two to three humans plus a fleet of specialized agents, all organized not by function but by outcome. The humans,
23:26
regardless of what their job title says, and it increasingly doesn't matter, set direction, evaluate quality, and make
Vibe Working: Describing Outcomes, Not Process
23:32
judgment calls. The agents execute, coordinate, and scale. The org chart stops being a hierarchy of people and it
23:39
becomes a map of human-agent teams, each owning a complete workflow end to end.
23:45
For leaders, this changes the fundamental equation we've been working with for a long time. It's not about how
23:50
many people we need to hire now. It's about what the right ratio of agents per person is and what each person
23:57
needs to be really excellent at to make that ratio work. The answer to the second question is really the same thing
24:03
that's distinguished really excellent people for a long time in software. It's great judgment. It's what we call taste,
24:09
which is vague, but typically means deeply understanding what the customer wants and being able to build it. It's
24:15
about domain expertise. It's about the ability to know whether the output is actually really, really good. And
24:21
those skills now have 100x leverage because they are multiplied by the number of agents that person can direct
24:28
and drive against. Dario Amodei, Anthropic's CEO, has set the odds of a billion-dollar solo-founded company
24:35
emerging by the end of 2026 at between 70 and 80%. Think about it. He thinks
24:41
there's a 75% chance that there will be a billion-dollar solo-founded company by
24:46
the end of this year. Sam Altman apparently has a betting pool among tech CEOs on the exact same question. Now,
24:53
whether or not you believe that specific prediction, the direction is undeniable. The relationship between headcount and
24:59
output is broken. And the organizations that figure out the new ratio first are going to outrun everybody who is still
25:06
assuming they need dozens of developers to do one major software project. If you follow the trajectory that Opus 4.6 set,
25:14
by mid-2026, June, July, August, somewhere in there, I would expect agents working
25:19
autonomously for weeks to become routine rather than remarkable. By the end of
25:24
the year, we are likely to see agents building full applications over potentially a month or more at a time.
25:31
Not toy applications, real production systems with real architecture decisions, complete with security reviews, with
25:38
test suites, with documentation, all handled by agent teams. The trajectory from hours to two weeks took just 3
25:45
months. The trajectory from two weeks to months, that's coming soon. And the inference demand that this generates is
25:51
agents consuming tokens continuously around the clock across thousands of parallel sessions. Companies are not
Revenue Per Employee at AI-Native Companies
25:57
ready for this. This is what makes the $650 billion in hypers scale infrastructure look conservative rather
26:04
than insane. Those data centers are not being built for chatbots people. They're being built for agent swarms running at
26:10
a scale that people have had difficulty modeling or wrapping their heads around. Opus 4.6 gives us a sense of that
26:17
future. So, what can you do about it? If you're sitting here thinking, "Oh my gosh, this is too much. It is coming too
26:23
fast." You're not alone. You're not alone. You can do this. If you write
26:28
code, try running a multi-agent session on real work, not a toy problem, a piece of your codebase with real technical
26:34
depth. Watch how the agents coordinate. That experience is going to change your mental model of what agents can do in a
26:41
way that matters much more than anything else. Because increasingly the way we work is the bottleneck for AI. If we
26:48
want to go faster and build more, if we want to feel like we have the ability to
26:54
do production work at the speed that AI demands, because increasingly that's what will be expected of humans, then I've
27:00
got to say, the best way we can get ready for that future is to change our mental models. If you don't
27:07
write code, open up Claude Co-work. Hand it a task you've been procrastinating on, one that's felt
27:12
really hard, right? A competitive analysis task, maybe a financial model task, a content audit across last
27:18
quarter's output. Just describe the outcome you want, not the steps to get there. See what comes back. The gap
27:24
between what you expect and what you get is the gap between your current mental model and where the tools are today. And
27:31
for managers, look honestly at the 20 hours a week your team spends on operational coordination, ticket
27:37
routing, dependency tracking. Ask how many of those hours really require excellent human judgment and which are
27:43
just pattern matching, because, I've got to say, AI can probably take over a lot
27:48
of the coordination work already. And if you run an organization, if you're on the senior leadership team, you've got to
27:54
understand that the question for your org has changed. It's not about whether we should adopt AI, or even which teams adopt it first.
28:01
It's really: what is our agent-to-human ratio, what does each human need to be excellent at to make that ratio work,
28:07
and how do we support our humans to get there? The people working in knowledge work desperately need their leaders to
28:14
understand that humans need a ton of support to get through this change management and become a new kind of
28:21
worker that partners with AI. That is not an easy thing and most orgs are underinvesting in their people. I tell
28:27
people in AI that if you are on the cutting edge of AI, it always feels like you're time traveling, because you look
28:34
at what's happening around you and then you go and you talk to people who haven't heard about it and they look at
28:39
you like you're crazy. They say, "No, you can't do that. You can't run 16 agents at a time and build a C
28:45
compiler in Rust. What do you mean an AI can manage 50 people?" And when you tell them that's just a Wednesday in February and
28:52
more is coming soon, then they really roll their eyes. But welcome to February. This is where we are. AI
28:58
agents can build production-grade compilers in just two weeks. They can manage engineering orgs autonomously. They can
29:04
discover hundreds of security vulnerabilities that human researchers missed. They can build your competitor's product in an hour for the cost of your
29:11
lunch. They can coordinate in teams, resolve conflicts, and deliver at a level that did not exist 8 weeks ago.
29:17
None of this was possible in January. And we don't know where this stops. We just know it's going faster. That's the
29:24
tension underneath all of the benchmark scores, all the deployment numbers. The fact is the agents are here, they work,
The Billion-Dollar Solo Founder Prediction
29:30
and it's just getting faster from here. And we're not sure what happens next. The question I have for all of us is how
29:39
do we do a better job supporting each other in adjusting to what feels like a race out of Mad Max some days? Welcome to
29:46
February. It's moving fast. If you're a people leader, you need to take time to
29:51
think about how to support your people to make it through this transition. If you are an individual contributor or a
29:56
manager, I am putting as many tools as I can up on the Substack to help you get
30:01
through this. But the best thing you can do, it's not about the Substack. I don't care. It's about you touching the AI and
30:09
getting hands-on and actually building or trying to build with an AI agent
30:15
system that launched not in January, not in December, but in February. And you
30:20
need to take that mindset forward every single month. In March, you should be touching an AI system that was built in
The Trajectory From Here
30:27
March. Every month now matters. Make sure that you don't miss it because our future as knowledge workers increasingly
30:34
depends on our ability to keep the pace and work with AI agents.