*AI Summary*
### Step 1: Analyze and Adopt
*Domain:* AI Software Engineering, Product Strategy, and Developer Tooling.
*Persona:* Senior AI Product Architect and Lead Systems Engineer.
---
### Step 2 & 3: Abstract and Summary
*Abstract:*
This transcript features Boris Cherny, the creator of Claude Code at Anthropic, discussing the development and strategic philosophy behind the agentic command-line interface (CLI) tool. Cherny outlines a "forward-compatible" product strategy—building for the capabilities of models six months in the future rather than current limitations. The discussion details the technical evolution of Claude Code from a simple API tester to a sophisticated agentic system utilizing subagents ("Mama Claude"), repo-level instructions (`CLAUDE.md`), and automated tool-use (bash, git, MCP). Key findings include a 150% increase in engineer productivity at Anthropic, the transition of coding from manual syntax entry to high-level system specification, and the eventual obsolescence of "Plan Mode" as model reasoning improves. Cherny also addresses the design constraints of the terminal and the broader shift from "Software Engineer" to "Builder" as coding becomes a commodity.
*The Evolution and Future of Agentic Coding: Insights from Boris Cherny*
* *01:45 Accidental Utility of the CLI:* Despite being intended as a starting point, the terminal remains the primary interface due to its efficiency and the "product overhang" where model capabilities exceed existing GUI tools.
* *02:38 Development Philosophy:* Anthropic’s core strategy is "building for the model of six months from now." Cherny advises founders to target frontiers where current models struggle, as those gaps will inevitably close.
* *05:38 The Power of Tool Use:* A pivotal moment occurred when the model (Sonnet 3.5) independently wrote AppleScript to query a local music player. This demonstrated that models are inherently "tool-seeking" entities.
* *07:51 Latent Demand & `CLAUDE.md`:* The `CLAUDE.md` file evolved from users manually feeding markdown instructions to the model. Cherny recommends keeping these files minimal and "deleting them to start fresh" with each new model to avoid over-engineering instructions that the model may no longer need.
* *12:55 Automated Debugging:* Advanced workflows involve models analyzing heap dumps and production logs via MCP (Model Context Protocol), often identifying memory leaks faster than senior human architects.
* *15:44 Beginner’s Mindset:* Cherny argues that "seniority" is being redefined. Traditional architectural opinions are often less relevant than the ability to think from first principles and adapt to rapidly improving model capabilities.
* *18:56 Generalists vs. Specialists:* Effective AI-augmented teams consist of "hyper-specialists" (deep system/runtime knowledge) and "hyper-generalists" who span product, design, and research.
* *21:51 Agent Topologies & Teams:* Claude Teams utilizes "uncorrelated context windows" to prevent context pollution. This multi-agent approach acts as a form of test-time compute, allowing swarms to build complex features (e.g., the plugins system) with minimal human intervention.
* *23:48 Recursive Subagents:* "Mama Claude" functions by recursively spawning subagents to handle parallel research or debugging tasks. Cherny notes that most agents are now prompted by other agents rather than humans.
* *25:12 The Obsolescence of "Plan Mode":* Plan Mode (a "please don't code yet" constraint) is predicted to have a limited lifespan as models gain the autonomy to decide when to plan versus execute.
* *30:57 Building for the "Model’s Will":* DevTool founders are encouraged to observe what the model *wants* to do and build technical solutions that serve both human users and agentic "latent demand."
* *32:11 TypeScript Parallels:* Cherny draws a comparison to the early days of TypeScript, which succeeded by being practical and mapping to how developers actually worked, rather than adhering to academic or "pure" functional programming ideals.
* *38:16 The Bitter Lesson & Scaffolding:* Anthropic avoids "scaffolding" (code built to prop up model weaknesses) that the next model iteration will likely render obsolete. General models consistently outperform specific, narrow code-based solutions over time.
* *40:31 Radical Productivity Gains:* Productivity per engineer at Anthropic has grown 150% since the release of Claude Code, with 70–90% of all code now written by the model. Cherny reports that he has uninstalled his IDE and lands roughly 20 PRs per day using only the CLI.
* *45:33 Safety and Scaling (ASL-4):* The discussion concludes on AI Safety Levels. ASL-4 represents models capable of recursive self-improvement, necessitating strict criteria to prevent catastrophic misuse (e.g., biothreats or automated zero-day creation).
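Cherny's advice on `CLAUDE.md` (keep it minimal, delete and restart with each new model) can be made concrete with a hypothetical example. The file contents below are invented for illustration and are not taken from the talk:

```markdown
# CLAUDE.md — kept deliberately short; rewritten from scratch for each new model
- Run `npm test` before committing.
- Prefer small, focused PRs over large ones.
- Source layout: app code in `src/`, tests in `test/`.
```

The point of the minimalism is that instructions written to patch one model's weaknesses often become dead weight (or actively misleading) once the next model no longer needs them.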
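The recursive subagent pattern from the 21:51 and 23:48 bullets can be sketched in plain Python. This is a toy model only: `run_agent` and `split_task` are invented stand-ins, not any real Claude Code API, and the "fresh context per agent" is modeled as a local list to mirror the "uncorrelated context windows" idea:

```python
# Toy sketch of recursive subagent fan-out (hypothetical helpers,
# not the real Claude Code API). A parent agent splits its task,
# spawns one subagent per subtask, and merges the results; each
# agent keeps its own local context, never sharing a window.

MAX_DEPTH = 2  # bound the recursion so the swarm stays finite

def split_task(task: str) -> list[str]:
    # Hypothetical splitter: treat "a + b + c" as three subtasks.
    parts = [p.strip() for p in task.split("+")]
    return parts if len(parts) > 1 else []

def run_agent(task: str, depth: int = 0) -> str:
    context: list[str] = []  # fresh, uncorrelated context per agent
    subtasks = split_task(task) if depth < MAX_DEPTH else []
    if not subtasks:
        return f"done:{task}"           # leaf agent executes directly
    for sub in subtasks:                # parent delegates, then merges
        context.append(run_agent(sub, depth + 1))
    return " | ".join(context)

print(run_agent("research + debug + test"))
```

This also illustrates Cherny's observation that most agents end up prompted by other agents: only the top-level call comes from a human; every recursive call is agent-to-agent.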
AI-generated summary created with gemini-3-flash-preview for free via RocketRecap-dot-com. (Input: 31,731 tokens, Output: 1,092 tokens, Est. cost: $0.0191.)
Transcript
0:00
so I've stripped all the bits of the
0:01
optical system so basically we've got
0:03
the uh the camera
0:05
itself which is mounted on this uh very
0:09
complex
0:10
adjustment thing which obviously to set
0:13
you the various tilt and uh alignment
0:15
stuff then there's two of these massive
0:18
lenses I've taken one of these apart I
0:20
think there's something like about eight
0:22
or nine Optical elements in here these
0:25
don't seem to do a great deal in terms
0:26
of electr magnification they're obiously
0:28
just about getting the image to where it
0:29
uh where it needs to be just so that
0:33
goes like that then this Optical block I
0:36
originally thought this was made of some
0:37
s crazy heavy material but it's just
0:39
really the sum of all these Optical bits
0:41
are just ridiculously heavy those lenses
0:43
are about 4 kilos each and then there's
0:45
this very heavy very solid um piece that
0:47
goes in the middle and this is so this
0:49
is the filter wheel assembly with a
0:51
hilariously oversized steper
0:53
motor driving this wheel with these very
0:57
large narrow band filters so we've got
1:00
various different shades of uh
1:03
filters there five Al together that
1:06
one's actually just showing up a silver
1:07
that's actually a a red but fairly low
1:10
transmission orangey red blue green
1:15
there's an excess cover on this side so
1:16
the filters can be accessed and changed
1:19
without taking anything else apart even
1:21
this is like ridiculous it's like solid
1:23
aluminium this is just basically a cover
1:25
the actual wavelengths of these are um
1:27
488 525 570 630 and 700 NM not sure what
1:32
the suffix on that perhaps that's the uh
1:34
the width of the spectral line say these
1:37
are very narrow band filters most of
1:39
them are you very little light through
1:41
so it's still very tight narrow band to
1:43
match the um fluoresence of the dies
1:45
they're using in the biochemical process
1:48
and obviously to reject the light that's
1:49
being fired at it from that Illuminator
1:51
box and then there's a there's a second
1:53
one of these lenses then the actual sort
1:55
of samples below that so uh very serious
1:58
amount of very uh chunky heavy Optics
2:01
okay let's take a look at this light
2:02
source made by company Lumen Dynamics
2:04
who are now part of
2:06
excelitas self-contained unit power
2:08
connector USB and this which one of the
2:11
Cable Bundle said was a TTL interface
2:14
USB wasn't used in uh the fluid
2:17
application output here and I think this
2:19
is an input for um light feedback I
2:21
don't if it's regulated or just a measur
2:23
measurement facility and the uh fiber
2:27
assembly
2:29
Square Inlet there and then there's two
2:32
outputs which have uh lens assemblies
2:35
and this small one which goes back into
2:37
that small Port just Loops out of here
2:40
straight back in So on this side we've
2:42
got the electronics which look pretty
2:44
straightforward we've got a bit of power
2:45
supply stuff over here and we've got
2:48
separate drivers for each wavelength now
2:50
interesting this is clearly been very
2:52
specifically made for this application
2:54
you I was half expecting like say some
2:56
generic drivers that could be used for a
2:58
number of different things but actually
3:00
literally specified the exact wavelength
3:02
on the PCB there is provision here for
3:04
385 NM which isn't populated but this is
3:07
clearly been designed very specifically
3:09
so these four drivers look the same but
3:10
then there's two higher power ones for
3:12
575 and
3:14
520 a slightly bigger heat sink on this
3:16
575 section there a p 24 which is
3:20
providing USB interface USB isolator the
3:23
USB interface just presents as a comport
3:26
I did have a quick look but I didn't
3:27
actually get anything sensible um I did
3:29
dump the Pi code out and there's a few
3:31
you a few sort of commands that you
3:32
could see in text but I didn't actually
3:34
manage to get it working properly I
3:36
found some software for related version
3:38
but it didn't seem to want to talk to it
3:39
but um I say that wasn't used for the
3:41
original application it might be quite
3:42
interesting to get try and get the Run
3:44
hours count out of it and the TTL
3:46
interface looks fairly straightforward
3:48
we've got positions for six opto
3:50
isolators but only five five are
3:52
installed so that corresponds with the
3:54
unused thing so I think this hopefully
3:56
should be as simple as just providing a
3:57
ttrl signal for each color to uh enable
4:00
it a big heat sink here which is there I
4:03
think there's like a big S of metal
4:04
plate through the middle of this that
4:05
all the leads are mounted on the other
4:07
side so this is heat sinking it with a
4:09
air flow from a uh just a fan in here
4:13
obviously don't have the air flow
4:14
anywhere near the Optics so conduction
4:17
cool through to this plate that's then
4:18
uh air cooled got some pots which are
4:21
presumably power
4:22
adjustments okay let's take a look at
4:24
the other side which is uh much more
4:27
interesting see we've got some uh very
4:31
uh neatly Twisted cable assemblies there
4:35
a bunch of leads so we've got one here
4:37
475 up here 430 NM 630 575 and 520
4:44
filters and dcro mirrors a quick way to
4:48
see what's white is if we just shine
4:49
some white light through
4:51
here not sure how it is is to see on the
4:54
camera but shining white light we do
4:55
actually get a bit of red a bit of blue
4:57
some yellow here so the obstacle path
5:00
575 it goes sort of here bounces off
5:03
this mirror and goes out the 520 goes
5:07
sort of down here across here and up
5:09
there 630 goes basically straight
5:13
through
5:15
430 goes across there down there along
5:17
there and the 475 goes down here and
5:20
left this is the light sensing thing
5:22
think here there's just a um I think
5:24
there a photo diode or other sensor
5:26
haven't actually taken that off and
5:28
everything's fixed down to this chunk of
5:31
aluminium which acts as the heat
5:32
spreader that then conducts the heat to
5:33
the back side for the heat
5:35
sink and the actual lead packages all
5:38
look fairly similar except for this one
5:41
on the 575 which looks quite a bit more
5:44
substantial big spay
5:46
Terminals and the interface for this
5:48
turned out to be extremely simple it's
5:50
literally a 5V TTL level to enable each
5:54
color doesn't seem to be any tensity
5:56
control but there are some additional
5:58
pins on that connector that weren't used
5:59
in the through time thing so maybe
6:01
there's some extra lines that control
6:02
that I couldn't find any data on this uh
6:05
unit and the um their current product
6:07
range is quite significantly different
6:09
so we've got the uh blue these
6:13
might may well be saturating the camera
6:16
so they might look a bit weird so that's
6:17
the 430
6:18
blue the 575
6:24
yellow uh
6:26
475 light blue
6:29
the uh 520
6:31
green and the uh 630 red now one
6:36
interesting thing I noticed for the
6:39
575 it's actually it's actually using a
6:42
white lead and then filtering it rather
6:44
than using all the other ones are using
6:46
leads which are the fundamental colors
6:47
but uh this is actually doing white and
6:50
it's a combination of this filter and
6:52
the dichroic mirrors that are turning to
6:55
Yellow if we take the filter out and a
6:57
lot of the a lot of the um blue content
7:00
is going this way the red is going
7:02
straight through these two mirrors so
7:05
this is clearly not reflecting much of
7:08
that so we end up with the yellow coming
7:10
out of uh out of there which is a fairly
7:14
light yellow color which you don't
7:16
really see from high intensity leads so
7:19
that's clearly why they've used the
7:20
white to uh do this power consumption of
7:23
the white is pretty high so going up to
7:25
about 2 and 1 half amps on that color
7:27
whereas most of the other colors are
7:28
only drawing half an amp or so at 24
7:30
volts the uh the green is up to about
7:32
1.2 but say this thing is uh much
7:35
brighter and if you actually run all the
7:38
colors at the same time you get a fairly
7:41
reasonable um looking white coming out
7:43
of it and one thing you might just be
7:45
out to notice is there is some sort
7:46
color banding around here that's not
7:49
getting uh everything s completely
7:51
concentric and I think that's where this
7:53
fiber optic thing comes
7:58
in I'll
8:00
get a couple of Fairly accurately shaped
8:04
very sort of uniform color and looking
8:06
at What's um inside here we've basically
8:09
just got this Square Rod so this is
8:12
clearly yeah the lights just bouncing
8:13
off all the all the various sides to um
8:16
get a nice uniform illumination uh this
8:19
back bit looks like it's all potted so
8:21
nothing I really do to get in there I
8:24
think this is fiber so I have come
8:26
across um cables like this which are
8:27
liquid fill but just looking through the
8:30
end of this it's probably a bit hard to
8:31
see it does look like there fiber ends
8:34
going going on there and so there's this
8:36
feedback thing which is just obviously
8:39
compensating for the any light losses
8:41
through here to get an accurate
8:43
representation of uh the light that's
8:45
been launched out of these two
8:47
fibers and you see uh
8:49
these have got this sort of trapezium
8:54
shape light guides again it's like a
8:56
sort of acrylic or glass light guide
9:00
guess projected just to make the right
9:03
rectangular
9:04
shape and look at this Center assembly
9:07
um the light output doesn't uh change
9:10
whether you feed this in or not so it's
9:11
clear not doing any internal Clos Loop
9:14
control obviously there may well be some
9:16
facility for it to do that but it's not
9:17
being used in this
9:19
application and so this output just
9:21
produces a voltage on the uh outle
9:24
connector proportional to the amount of
9:26
light that's present so there's a little
9:28
diffuser in the back there
9:30
and then there's just some kind of uh
9:33
Optical sensor looks like a
9:35
chip looking at the lead it's a very
9:37
small package on the PCB with this lens
9:40
assembly over the top and these look
9:43
like they're actually on a copper
9:44
Metalized PCB for maximum thermal
9:47
performance and yeah it's a very small
9:49
package looks like it's a ceramic
9:51
package and there's a thermister there
9:53
for temperature monitoring this is the
9:56
475 blue one this is the 520 need to
9:59
Green which is uh rather different OB
10:02
it's a much bigger D with lots of bond
10:04
wise but also this looks like it's using
10:05
a phosphor if I shine a blue light at it
10:08
lights up green so this is actually a
10:10
phosphor conversion green lead which
10:12
I've I've come across before they want
10:15
that specific wavelength so they may be
10:17
easier to tune a phosphor than tune the
10:20
um semiconductor material to get the uh
10:23
right right wavelength from the lead
10:24
directly uh red 630 similar size to the
10:28
blue one or does seem to have a uh a
10:31
lens on top of it there is a sort of red
10:33
coloring to
10:35
the die but that doesn't appear to be
10:38
fluorescent as far as I can
10:39
tell and the white one again a little
10:41
bit different sort of much higher
10:43
current
10:46
connectors a makeer name on that
10:48
connector flot light not sure if that's
10:52
the connector or the lead
10:54
itself and obviously with the phosphor
10:56
and I'd imagine that phosphor may well
10:58
be tuned to get the maximum to the uh 5
11:01
cenm and actually this white one looks
11:04
like a St fairly standard product I just
11:06
found it in Mouse made by luminous
11:09
devices in fact actually I think all
11:11
these are based on various luminous
11:13
devices modules and they're you take
11:17
looks like they taking the nearest
11:18
wavelength and then just using these
11:19
filters to clean it up to get a precise
11:22
uh spectral line out of it so quite a
11:25
nice neat and um extreme
11:30
bright light source uh sure I've got any
11:33
particular use for it so I think this
11:35
might end up on
11:36
eBay but uh very pretty to look out and
11:40
without the uh risk of burning your eyes
11:43
out like you do with lasers so I thought
11:45
it would be interesting to try and
11:46
figure out the runtime of this things
11:48
like this we usually keep some sort
11:49
record of runtime cuz leads degrade over
11:51
time I couldn't get any software to work
11:52
through the USB face but then had a
11:54
thought probably going to be writing the
11:55
runtime periodically to the e s prom so
11:58
I just just scope up that and noticed it
12:00
was doing right every 5 minutes so I
12:02
just ran it for a while periodically
12:04
reading the E squ I just held the pick
12:05
in in reset and um put clip over to read
12:07
the square prom and found it was writing
12:10
one location per color every 5 minutes
12:12
so if one color was on it would write
12:14
that location every 5 minutes and just
12:16
increment it by one so after doing a few
12:18
tests with different colors of different
12:19
time periods it looked extremely
12:21
straightforward it's like a four bite
12:22
count for each color looking at the
12:24
original data that was in it all the
12:26
colors apart from Green were reading
12:28
zero and the green was reading four
12:30
indicating a total 20 minutes run time
12:32
ever if it was turned on run for a short
12:34
time then turned off that might not have
12:36
been counted but even so indicates this
12:37
thing wasn't used a great deal the whole
12:40
s process of doing a run can be several
12:42
hours but it'll only be doing probably
12:43
the Imaging at the end of that so you
12:46
wouldn't expect to be running for a long
12:47
time but say a single color for 20
12:50
minutes over its whole lifetime does
12:52
seem a little bit on the low side okay
12:55
let's look at the camera un fortunately
12:57
I managed to not record any sound when I
12:58
did this it's also a couple of months
13:00
ago so there's going to be a few details
13:02
that I've forgotten so I'm just going to
13:04
dub this over the original footage so um
13:07
take the lid off see this massive great
13:10
heat sink so this is a pel cool camera
13:12
we've got this blower fan producing a
13:14
fair amount of air flow through
13:16
it the connector here there's the ccds
13:19
mounted on the board on the
13:24
right this unplugs so we've got a bit of
13:27
power supply stuff on here
13:29
USB interface I think that's the Cyprus
13:32
microcontroller High speeded USB
13:34
interface there's a zyink spon fpga some
13:40
RAM and there's a couple of ATD
13:42
converters can't quite read what those
13:45
those are but anal
13:47
devices um little bit of bodgery around
13:51
here extra decoupling obviously they
13:53
have having some noise issues this is
13:55
around the ram chip quite a lot of extra
13:57
capacitors been added there
13:59
uh there's a couple of amplifiers prior
14:01
to the HD converter buffers or Andor
14:05
amplifiers taking the CCD
14:08
signal um bit more power spy stuff here
14:11
this is probably all to do with
14:12
generating the various CCD bias voltages
14:14
they uh need quite a lot of exotic
14:18
voltages next board down is just a
14:20
shield and an interconnect
14:24
boardly shielding the power supply stuff
14:26
from some the more sensitive an log
14:28
stuff
14:31
and this is the bottom board which is
14:32
just all power supply
14:34
stuff as you can see tons of capacitors
14:37
or Transformer in
14:42
there and this is the CCD which is a uh
14:47
very impressive thing this is a kf50 100
14:50
originally by true sense then codec
14:53
there ON
14:54
Semiconductor it's 50 megapixels uh the
14:58
only price I could find was this one
15:00
5,000 bucks and the architecture you can
15:03
see there actually two separate halves
15:04
which explains the Dual AZ converters
15:06
and two amplifiers it's literally split
15:08
down the middle and duplicated so it's
15:10
outputting two streams in parallel just
15:13
to keep the bandwidth sensible and it's
15:15
got this amazing um diffraction effects
15:18
it's got micro lenses over the pixel so
15:20
there's there's a bit more Optics going
15:22
on than on a normal
15:25
sensor few more bodges on the CCD board
15:28
including this wire which isn't really
15:29
tacked down very well which is a bit uh
15:32
bit of a mess quite a few bits around
15:34
this board where they've uh tacked
15:36
various bits on which is not super
15:38
impressive looks like CCD drivers on the
15:40
left with those 3 ohm um damping
15:43
resistors on the
15:47
output get a few more little bodges
15:50
around here some of
15:52
the and there's this separator the
15:54
silica gel to keep the moisture down but
15:56
there's this separator that actually
15:58
appears to be cut from piece of
15:59
antistatic
16:04
bag and this sort of thermal block on
16:06
top of this stack of three pel Cola
16:12
modules so as with any Stacks they get
16:16
um larger as they go back towards the
16:18
heat sink because each P's got to not
16:20
only take the heat from the previous but
16:21
also the waste heat which is quite
16:27
significant you see a little temperature
16:29
sensor here that copper block which
16:32
makes contact with the back of the
16:37
CCD and this's the back of the
16:40
pelas this then contacts the heat sink
16:44
on the uh rear there a few thermal pads
16:46
as well for some of the other power
16:47
components on this
16:51
PCB okay I've connected this uh camera
16:54
up I found some drivers on the disc that
16:56
seem to work under Windows 7 couldn't
16:58
get to install under Windows 11 though
17:01
um in the absence of any sort of lens or
17:03
being bothered to the proper amount I've
17:04
just put some f over it and put a little
17:06
pin in there to make a pinhole lens and
17:08
software gives a few options I'm not
17:11
entirely sure what all these are there's
17:12
obviously a clock frequency 22 MHz low
17:15
gain and with PFG no idea what that is
17:19
something something game programmable
17:20
Something game perhaps ver exposure
17:23
types I think focus is just like a
17:25
continuous grab until you tell it to
17:27
stop not entirely sure all these options
17:30
are obviously exposure time uh triggers
17:33
there ex external hardware trigger inut
17:35
you just trigger using a um thing on
17:37
screen so the resolution is 8176 by
17:40
6132 and you can actually bin those
17:42
where you combine multiple pixels to get
17:46
increased gain at the expense of lower
17:48
resolution down this is a 10sec exposure
17:51
obviously of the pin hole it's very uh
17:53
intensitive so we just stand still now
17:56
downloading it there's the uh exposure
17:59
so when it's
18:01
um there's a little status thing down
18:03
here so that tells you the um exposure
18:07
[Applause]
18:09
time it's this is just it
18:15
downloading um it is quite I'm seeing
18:18
quite a lot like smearing I think that I
18:20
don't know whether that's just due to
18:21
pixels overloading or something else I
18:24
mean yeah it's not it's not um out of
18:26
the question that there's something not
18:27
totally right about this camera
18:28
certainly was bodge wires on there um I
18:31
don't I'd imagine a camera like this
18:32
it's got a fairly narrow range of
18:34
intensities that it's happy with I'm not
18:36
going to spend a great deal of time on
18:38
this if you're interested in this camera
18:40
maybe for astronomy or something and
18:42
happy to sort of take the risk of it may
18:44
not be uh perfect I'll um I think I'll
18:47
stick this on eBay along with the
18:48
Illuminator I'll put a link down in the
18:50
description to the listing take your
18:52
chances to grab a bargain so for example
18:54
here we see this vertical streaking so
18:56
I'm not sure how normal that is this is
18:58
on fairly bright scene looking out the
19:02
window if I cut the exposure time down
19:04
on that it's now 1 second
19:07
exposure again most of the image
19:09
disappears again this looks like it's
19:11
possibly still overloading here let's go
19:14
down to say a quarter of a
19:16
second so again I think there might be
19:19
some Auto gain control going on here um
19:21
this is with the PFG option let's try
19:23
turning that off and see what
19:25
happens so I'm not sure this is actually
19:27
more streaking or whether it's just
19:29
cranked up the gain or the display
19:31
gray scale to show what um you know the
19:33
range of things that it's captured
19:36
there's one of one of the odd things in the
19:38
software is um you can see you
19:40
can't seem to read out the temperature
19:42
of the Peltier cooler but you can set the
19:44
temperature and if you set it to a
19:46
different temperature you see the power
19:48
consumption jump up running the cooler
19:50
to get the temperature you requested but
19:52
I can't see anything anywhere that tells
19:54
you whether the cooler is at the
19:56
temperature other than the power
19:57
consumption going down and there's no
19:59
temperature read out
20:03
here and just some yeah this is just
20:05
sort of very basic software I'm sure
20:07
there's like an API for more
20:09
sophisticated
20:10
applications but so if you know anything
20:12
more about these cameras please um stick
20:14
in the
20:15
comments um incidentally when I was
20:18
editing I noticed there was a bent
20:19
pin on the um CCD but I did fix that
20:22
before doing these tests and also
20:24
reactivated the um silica gel desiccant
20:26
cuz I noticed it was uh I was getting
20:28
bit of condensation on the window but um
20:31
yeah so a couple of uh interesting but
20:34
maybe not particularly uh useful pieces
20:37
of Kit except for someone that's got a
20:38
very specific use so um I'll stick a
20:42
I'll stick these on eBay put a link in
20:44
the description and say hopefully
20:45
someone could actually make some uh good
20:47
use of these things
Example Output:
**Abstract:**
This video presents Part 2 of a teardown focusing on the optical components of a Fluidigm Polaris biotechnology instrument, specifically the multi-wavelength illuminator and the high-resolution CCD camera.
The Lumen Dynamics illuminator unit is examined in detail, revealing its construction using multiple high-power LEDs (430nm, 475nm, 520nm, 575nm, 630nm) combined via dichroic mirrors and filters. A square fiber optic rod is used to homogenize the light. A notable finding is the use of a phosphor-converted white LED filtered to achieve the 575nm output. The unit features simple TTL activation for each color, conduction cooling, and internal homogenization optics. Analysis of its EEPROM suggests extremely low operational runtime.
The camera module teardown showcases a 50 Megapixel ON Semiconductor KAF-50100 CCD sensor with micro-lenses, cooled by a multi-stage Peltier stack. The control electronics include an FPGA and a USB interface. Significant post-manufacturing modifications ("bodges") are observed on the camera's circuit boards. Basic functional testing using vendor software and a pinhole lens confirms image capture but reveals prominent vertical streaking artifacts, the cause of which remains uncertain (potential overload, readout artifact, or fault).
**Exploring the Fluidigm Polaris: A Detailed Look at its High-End Optics and Camera System**
* **0:00 High-End Optics:** The system utilizes heavy, high-quality lenses and mirrors for precise imaging, weighing around 4 kilos each.
* **0:49 Narrow Band Filters:** A filter wheel with five narrow band filters (488, 525, 570, 630, and 700 nm) ensures accurate fluorescence detection and rejection of excitation light.
* **2:01 Customizable Illumination:** The Lumen Dynamics light source offers five individually controllable LED wavelengths (430, 475, 520, 575, 630 nm) with varying power outputs. The 575nm yellow LED is uniquely achieved using a white LED with filtering.
* **3:45 TTL Control:** The light source is controlled via a simple TTL interface, enabling easy on/off switching for each LED color.
* **12:55 Sophisticated Camera:** The system includes a 50-megapixel Kodak KAF-50100 CCD camera with a Peltier cooling system for reduced noise.
* **14:54 High-Speed Data Transfer:** The camera features dual analog-to-digital converters to manage the high data throughput of the 50-megapixel sensor, which is effectively two 25-megapixel sensors operating in parallel.
* **18:11 Possible Issues:** The video creator noted some potential issues with the camera, including image smearing.
* **18:11 Limited Dynamic Range:** The camera's sensor has a limited dynamic range, making it potentially challenging to capture scenes with a wide range of brightness levels.
* **11:45 Low Runtime:** Internal data suggests the system has seen minimal usage, with only 20 minutes of recorded runtime for the green LED.
* **20:38 Availability on eBay:** Both the illuminator and camera are expected to be listed for sale on eBay.
Here is the real transcript. What would be a good group of people to review this topic? Please provide a summary like they would:
Inside Claude Code With Its Creator Boris Cherny
Y Combinator
2.15M subscribers
52,494 views Feb 17, 2026 Lightcone Podcast
A very special guest on this episode of the Lightcone! Boris Cherny, the creator of Claude Code, sits down to share the incredible journey of developing one of the most transformative coding tools of the AI era.
00:00 Intro
01:45 The most surprising moment in the rise of Claude Code
02:38 How Boris came up with the idea for Claude Code
05:38 The elegant simplicity of terminals
07:09 The first use cases
09:00 What’s in Boris’ CLAUDE.md?
11:29 How do you decide the terminal’s verbosity?
15:44 Beginner’s mindset is key as the models improve
18:56 Hyper specialists vs hyper generalists
21:51 The vision for Claude teams
23:48 Subagents
25:12 A world without plan mode?
28:38 Tips for founders to build for the future
30:07 How much life does the terminal still have?
30:57 Advice for dev tool founders
32:11 Claude Code and TypeScript parallels
35:34 Designing for the terminal was hard
37:36 Other advice for builders
40:31 Productivity per engineer
41:36 Why Boris chose to join Anthropic
44:46 How coding will change
46:22 Outro
Apply to Y Combinator: https://www.ycombinator.com/apply
Work at a startup: https://www.ycombinator.com/jobs
Transcript
Intro
0:00
At Anthropic, the way that we thought about it is we don't build for the model of today. We build for the model six
0:05
months from now. That's actually like still my advice to founders that are building on LLMs. Just try to think about
0:10
like what is that frontier where the model is not very good at today cuz it's going to get good at it. All of Claude Code has just been written and rewritten
0:16
and rewritten and rewritten over and over and over. There is no part of Claude Code that was around 6 months ago. You try a thing, you give it to users, you
0:22
talk to users, you learn, and then eventually you might end up at a good idea. Sometimes you don't. Are you also in the back of your mind thinking that
0:28
maybe like in 6 months you won't need to prompt that explicitly? Like the model will just be good enough to figure it out on its own?
0:34
Maybe in a month, no more need for plan mode in a month. Oh my god.
0:46
Welcome to another episode of the Lightcone and today we have an extremely special guest, Boris Cherny, the
0:53
creator and engineer of Claude Code. Boris, thanks for joining us.
0:58
Thanks for having me. Thanks for creating a thing that has taken away my sleep for about 3 weeks straight.
1:05
I am very addicted to Claude Code and uh it feels like rocket boosters. Has it
1:10
felt like this for people like for you know months at this point. I think it was like end of November is where uh a
1:16
lot of my friends said like something changed. I remember for me I felt this way when I first created Claude Code and I didn't yet
1:22
know if I was on to something. I kind of felt like I was on to something and then that's when I wasn't sleeping. Yeah.
1:28
And that was just like three straight months. This was uh September 2024. Yeah. It was
1:33
like three straight months. I I didn't take a single day vacation. Worked through the weekends. Worked every single night. I was just like, "Oh my
1:39
god, this is I think this is going to be a thing. I don't know if it's useful yet because it it couldn't actually code
1:44
yet." If you look back on uh those moments to now, like what would be like the most
The most surprising moment in the rise of Claude Code
1:50
surprising thing about this moment right now? It's unbelievable that we're still using a terminal. That was supposed to be the
1:55
starting point. I didn't think that would be the ending point. And then the second one is that it's even useful cuz uh you know at the beginning it didn't
2:02
really write code. Even in February when we G it wrote maybe like 10% of my code or something like that. I didn't really
2:07
use it to write code. it wasn't very good at it. I still wrote most of my code by hand. Uh so the fact that it it
2:13
actually like our bets paid off and it got good at the thing that we thought it was going to get good at because it
2:19
wasn't obvious. At Anthropic, the way that we thought about it is we don't build for the model of today. We build for the model 6 months from now. And
2:27
that's actually like still my advice to founders that are building on LLMs is, you know, just try to think about like
2:32
what is that frontier where the model is not very good at today. um because it's going to get good at it and you just
2:37
have to wait. Going back though, but when do you remember when you first got the idea? Can you just talk us through that? Like was it some like a spark or what was
How Boris came up with the idea for Claude Code
2:44
even the first version of it in your mind? You know, it's funny. It was like it was so accidental that it just kind of evolved into this. Um you know as as
2:52
Anthropic I think for Anthropic the bet has been coding for a long time and the bet
2:57
has been the path to safe AGI is through coding and this has kind of always been
3:03
the idea and the way you get there is you you teach the model how to code then you teach it how to use tools then you
3:08
teach it how to use computers um and you can kind of see that because the the first team that I joined at Anthropic it
3:14
was called the Anthropic Labs team uh and it produced three products: it was Claude Code, MCP, and the desktop app. So
3:21
you can kind of see how these like weave together. The particular product that we built, you know, like no one no one
3:27
asked me to build a CLI. Um we kind of knew maybe it was time to build some kind of coding product cuz it
3:33
seemed like the model was ready, but no one had yet really built the product that harnessed this capability. So like
3:39
still there's this insane feeling of product overhang. But at the time it was just like even crazier cuz like no one
3:44
had built this yet. And so I I started like hacking around uh and I was like, "Okay, we build a coding product. What
3:50
do I have to do first? I have to understand how to use the API because I hadn't used the Anthropic API at that point. Um and so I I just built like a
3:58
little terminal app to use the API. That's all that I did. And it was a little chat app because you know like you think about the you know AI
4:04
applications of the time and you know for non-coders today most what what are most people using is just a chat app. So
4:09
that's what I built. Uh and you know it was in a terminal. I can ask questions. I give answers. Then I think tool use
4:16
came out. I just wanted to try out tool use because I I don't really understand what this is. I was like to use this is cool. Is this actually useful? Probably
4:22
not. Let me just try it. You built it in terminal just because it was the easiest way to get something up and running.
4:27
Yes. Cuz I didn't have to build a UI. Okay. It was just me at that point. It was like the IDEs, Cursor, Windsurf
4:34
taking off. Were you sort of under any pressure or getting lots of suggestions of, hey, like we should build this out
4:40
as a plugin or as a as a fully featured ID itself? There was no pressure because we didn't even know what we wanted to
4:45
build. Like the the team was just in explore mode, you know, like we we didn't we know vaguely we wanted to do something in coding, but it wasn't
4:51
obvious what no one was high confidence enough. That was like my job to figure out. And so I g I gave the model uh the
4:57
bash tool. That was the first tool that I gave it just cuz I think that was literally the example in our docs. I
5:03
just like took the example. It was in Python. I just ported it to TypeScript because that's how I wrote it. You know, I didn't know like what the model
5:08
could do with bash. So I asked it to like read a file. It could like cat the file. So like that was cool. And then I
5:13
was like, "Okay, like what can you actually do?" And I asked it, "What music am I listening to?" It wrote some AppleScript to script my Mac
5:21
and look up the music in my music player. Oh my god. And this was Sonnet 3.5.
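The bash tool Boris describes handing the model maps onto the tool-use flow of the Anthropic Messages API: you give the model a JSON-schema tool definition, and when it replies with a `tool_use` block you run the command locally and feed the result back. A minimal sketch in TypeScript (illustrative only — this is not the actual Claude Code source, and the model's reply is stubbed here rather than fetched from the API):

```typescript
import { execSync } from "node:child_process";

// Tool definition in the shape the Anthropic Messages API expects:
// the model sees only this schema and decides when to call it.
const bashTool = {
  name: "bash",
  description: "Run a shell command and return its stdout.",
  input_schema: {
    type: "object",
    properties: { command: { type: "string", description: "The command to run" } },
    required: ["command"],
  },
} as const;

// When the model replies with a `tool_use` content block, the client
// executes it locally; the output goes back to the model as a `tool_result`.
function runBashToolUse(block: { name: string; input: { command: string } }): string {
  if (block.name !== bashTool.name) throw new Error(`unknown tool: ${block.name}`);
  return execSync(block.input.command, { encoding: "utf8", timeout: 10_000 });
}

// Simulated tool_use block, standing in for a real model response:
const output = runBashToolUse({ name: "bash", input: { command: "echo hello from bash" } });
console.log(output.trim()); // prints "hello from bash"
```

In the real loop this output would be appended to the conversation as a `tool_result` message so the model can keep working — cat-ing a file or, as here, scripting the machine with whatever commands it chooses.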
5:26
And you know, like I I didn't think the model could do that. And that was my first I think ever fuel the AGI moment
5:33
where I was just like, "Oh my god, the model it just wants to use tools. That's all it wants." That's kind of fascinating. I mean it's
The elegant simplicity of terminals
5:39
very kind of contrarian that Claude Code works so well in such an elegant simple
5:46
form factor. I mean terminals have been around for a really long time and that seemed to be like a good design
5:53
constraint that allowed a lot of interesting developer experiences like
5:58
it doesn't feel like working. It just feels fun as a developer. I don't think about files where everything is and that
6:05
came by accident almost. Yeah, it was an accident. I remember so after the terminal started to take off
6:11
internally. Um and honestly like after building this thing I think like 2 days after the first prototype I started
6:16
giving it to my team just for dogfooting cuz you know like you know if you come up with an idea and it seems useful the
6:22
first thing you want to do is you want to give it to people to see how they use it. And then I came in the next day and then Robert who sits across from me
6:28
who's another engineer he he just like had quad code on his computer and he was like using it to code. I was like I was
6:34
like what what are you what are you doing? Like this thing isn't ready. It's just a prototype. But yeah, it it was already useful in that form factor. And
6:40
I remember when we did our launch review to kind of launch quad code externally, this was in December, November,
6:47
something like that in 2024. Um Dario asked and he was like, "The usage chart
6:52
internally, like the the DAU chart, is like vertical. Are you like forcing engineers to use it? Like why are you
6:57
mandating them?" And I was just like, "No, no, we didn't. We I just like posted about it and they they' just been like telling each other
7:03
about it." Honestly, it was it was just accidental. We we started with the CLI because it was the cheapest thing and it just kind of stayed there for a bit.
The first use cases
7:09
So in that 2024 period, what how were the engineers using it? Were they sort
7:14
of shipping code with it yet or were they using it in a different way? The model is not very good at coding yet. I I was using it personally for
7:21
automating git. Um I think at this point I've probably forgotten most of my git because Claude Code has just been doing
7:28
it for so long. But yeah, like automating uh bash commands that that was a very early use case and like operating like Kubernetes and kind of
7:34
things like this. People were using it for coding. So there were some early signs of this. I think the first use case was actually writing unit tests
7:40
because it's a little bit lower risk and the model was still pretty bad at it but people were were were kind of
7:46
figuring it out and and they were figuring out how to use this thing. Um and one thing that we saw is people
7:51
started writing these markdown files for themselves and then having the model read that markdown file. And this is
7:57
where CLAUDE.md came from. Probably the single biggest principle for me in product is latent demand. Um and
8:03
just every bit of this product is built through latent demand after the initial CLI. Uh and so CLAUDE.md is an
8:09
example of that. There's this other general principle that I think is maybe interesting where you can build for the
8:15
model and then you can build scaffolding around the model in order to improve performance a little bit and depending
8:20
on the domain you can improve performance maybe 10 20% something like that and then essentially the gain is
8:26
wiped out with the next model. So either you can build build the scaffolding and then you know get some performance gain
8:31
and then rebuild it again or you just wait for the next model and then you kind of get it for free. The CLAUDE.md
8:36
and kind of the scaffolding is an example of that and really I think that's why we stayed in the CLI is because we felt there is no UI we could
8:44
build that would still be relevant in 6 months because the model was improving so quickly earlier we were saying like we should
8:50
compare CLAUDE.mds but you said something very profound which is you know yours is actually very short which is almost like
8:57
the opposite of what you know people might expect why is that what's in your CLAUDE.md okay so I I checked this before we came
What’s in Boris’ CLAUDE.md?
9:04
so my CLAUDE.md has two. Um, one is, uh, there it's just two
9:09
lines. So, the first line is whenever you put up a PR, enable automerge. Um, so as soon as someone accepts it, it's
9:15
merged. That's just so I can like code and I don't have to kind of go back and forth with CR or whatever. And then the
9:21
second one is whenever I put up a PR, post it in our internal team stamps channel. Uh, just so someone can stamp
9:26
it and I can get unblocked. Uh, and the idea is every other instruction is in
9:31
our CLAUDE.md that's checked into the codebase and it's something our entire team contributes to multiple times a
9:37
week. And very often I'll see someone's PR and they make some like mistake
9:43
that's totally preventable and I'll just literally tag Claude on the PR. I'll just do like @claude, you know, like,
9:48
add this to the CLAUDE.md, and I'll do this, you know, like many times a week. Do you have to like compact the CLAUDE.md? Like I definitely reached a point
9:54
where I got the message at the top saying your CLAUDE.md is like thousands of tokens now. What do you do when you
10:00
guys hit that? So our CLAUDE.md is actually pretty short. I think it's like a couple thousand tokens maybe something like that. Um if if you
10:07
hit this my recommendation would be delete your CLAUDE.md and just start fresh. Interesting. I think a lot of people like they try to
10:12
overengineer this right and and really like the capability changes with every model. And so the thing that you want is
10:19
do the minimal possible thing in order to get the model on track. And so if you delete your CLAUDE.md and then you know the
10:25
model is getting off track, it does the wrong thing. That's when you kind of add back a little bit at a time. And what you're probably going to find is with
10:31
every model, you have to add less and less. For me, I consider myself a pretty average engineer to be honest. Like I
10:36
don't use a lot of fancy tools. Like I I don't use like Vim. I use, you know, VS Code because it's simpler. Um I don't
10:42
really Wait, really? I would have assumed that because you built this in the terminal that you were sort of like a die-hard terminal, like Vim-only person
10:49
you know screw those VS Code people you know well we have people like that on the team there's you know like Adam Wolf for
10:54
example he's he's on the team he's like you will never take Vim from my cold dead hands like yeah so there's definitely a
11:00
lot of people like that on the team and this is one of the things that I learned early on is every engineer likes to hold their dev tools differently they like to
11:06
use different tools there's just no one tool that works for everyone but I think also this is one of the things that makes it possible for Claude Code to be so
11:13
good because I kind of think about it as what is the product that I would use that makes sense to me and so to use
11:19
Claude Code you don't have to understand Vim you don't have to understand tmux you don't have to know how to like SSH you don't have to know all this stuff you
11:25
just have to open up the tool and it'll guide you it'll it'll do all this stuff how do you decide how verbose you want
How do you decide the terminal’s verbosity?
11:30
like sort of the terminal to be like sometimes you have to go you know control O and check it out and is it
11:36
like internal bike shed battles around like longer shorter I mean every user
11:42
probably has a for an opinion like how do you make those sorts of decisions? What What's your opinion? Is it is it
11:47
too verbose right now? Oh, I love the verbosity cuz basically sometimes it just like goes off the deep
11:52
end and I'm watching and then I can just read very quickly and it's like, "Oh, no, no, it's not that." And then I
11:57
escape and then just stop it and then it just like stops an entire bug farm like as it's happening. I mean, that's
12:03
usually when I didn't do plan mode properly. This is something that we probably change pretty often. Um, I remember
12:09
early on, this is maybe six months ago, I tried to get rid of bash output just internally just to like summarize it because I was like these giant long bash
12:16
commands, I don't actually care. And then I gave it to anthropic employees for a day and everyone just revolted.
12:22
I want to see my bash because it it actually is quite useful for, you know, like for something like git output,
12:27
maybe it's not useful, but if you're running, you know, like Kubernetes jobs or something like this, you actually do want to see it. We recently hid
12:33
the file reads and uh file searches. So you'll notice instead of saying, you know, like "read foo.md" it said, you know,
12:40
like "read one file," "searched one pattern." And this is something I think we could not have shipped six months ago
12:45
because the model just was not ready. It would have, you know, it still read the wrong thing pretty often. As a user, you still had to be there and kind of catch
12:51
it and debug it. But nowadays, I just noticed it's on the right track almost every time. And because it's using tools
12:57
so much, it's actually a lot better just to summarize it. Um, but then we shipped it. Uh, we dog fooded it for like a
13:03
month and then people on GitHub didn't like it. Uh so there was a big issue where people like no like I want to see the details and that was really great
13:09
feedback. Um and so we added a new verbose mode and so that's just like in slash config you can enable verbose mode
13:15
and if you want to see all the file outputs you can continue to do that and then I posted on the issue and people
13:20
still still didn't like it which is again awesome because like my favorite thing in the world is just hearing people's feedback and hearing how they
13:26
actually want to use it. Um and so we just like iterated more and more and more to get that really good and to make it the thing that people want. I'm
13:32
amazed like how much I enjoy uh fixing bugs now. And then all you have to do is uh have really good logging and then
13:40
even just say like hey check out that you know this particular object it messed up in this way and it like
13:46
searches the log. It figures everything out. It can like go into your you can make a production tunnel and it'll look at your production DB for you. It's like
13:53
this is insane. Bug fixing is just going to Sentry, copy markdown. You know pretty soon it's just going to be straight MCP.
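The "searches the log" step being described is, at bottom, a tool call like grep with a bit of surrounding context. A toy TypeScript sketch (the log format and function name here are invented for illustration, not anything Claude Code actually ships):

```typescript
// Toy version of a "search the logs" tool: scan plain-text log output
// for lines mentioning a term, keeping one line of context on each side
// so the model can reason about what happened around the failure.
function searchLog(logText: string, term: string, context = 1): string[] {
  const lines = logText.split("\n");
  const hits: string[] = [];
  lines.forEach((line, i) => {
    if (line.includes(term)) {
      const start = Math.max(0, i - context);
      const end = Math.min(lines.length, i + context + 1);
      hits.push(lines.slice(start, end).join("\n"));
    }
  });
  return hits;
}

// Hypothetical log excerpt, as good logging might produce it:
const log = [
  "10:00:01 INFO  request start id=42",
  "10:00:02 ERROR TypeError: cart.items is undefined id=42",
  "10:00:02 INFO  request end id=42",
].join("\n");

console.log(searchLog(log, "ERROR")[0]);
```

The point of the passage is that once logging is good, a step like this plus the model's reasoning is enough to localize most bugs without a human reading the log end to end.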
13:59
It's like an auto bug-fixing and test-making sort of uh what's the new uh
14:05
term they call it like making a startup factory. Oh yeah. Right. There's like all these concepts
14:10
now of rather than having to review the code, you know, I'm I'm old school, so I
14:16
like the verbosity. I like to say, "Oh, well, you're doing this, but I want you to do that." Right? But there's a
14:21
totally different school of thought now that says like anytime an a real human being has to look at code uh that's bad.
14:29
Yeah. Yeah. Yeah. Which is fascinating. I think like Dan Chipper talks about this a lot as kind of when whenever you
14:35
see the model make a mistake try to put in the quadmd try to put it in like skills or something like that so it's reusable. But I I think there's this
14:42
meta point that I actually struggle with a lot. And I people talk about like agents can do this, agents can do that,
14:47
but actually what agents can do, it changes with every single model. And so sometimes there's a new person that
14:52
joins the team and they actually use Claude Code more than I would have used it. And I'm just constantly surprised by
14:58
this. Like for example, there was a we had like a memory leak and we were trying to debug it. Um and by the way,
15:04
like Jared Sumar has just been on this crusade killing all the memory leaks and it's just been amazing. But before Jared
15:09
was on the team, I had to do this and there was this memory leak. I I was trying to debug it. And so I I took a
15:14
heap dump. I opened it in DevTools. I was looking through the profile. Then I was looking through the code and I I was just trying to figure this out. And then
15:20
another engineer on the team, Chris, he just like asked Claude Code. He was like,
15:25
"Hey, I think there's a memory leak. Can you like run this?" And then like try to figure it out. And Claude Code like took the heap dump. It wrote a little tool
15:30
for itself to like analyze the heap dump. And then it found the leak faster than I did. And this is just something I
15:37
have to constantly relearn because my brain is still stuck somewhere six months ago at times.
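The "little tool" Claude Code wrote for itself isn't shown, but the core leak-hunting heuristic — compare successive heap snapshots and flag object types whose counts only ever grow — can be sketched like this (illustrative; these simplified per-constructor counts stand in for real V8 heap-snapshot data, which is far richer):

```typescript
// Simplified heap "snapshots": live object counts per constructor name.
type Snapshot = Record<string, number>;

// A constructor whose instance count grows strictly across every snapshot
// is a leak suspect; a count that ever shrinks means instances were
// garbage-collected, so that type is probably fine.
function leakSuspects(snapshots: Snapshot[]): string[] {
  const names = new Set(snapshots.flatMap((s) => Object.keys(s)));
  const suspects: string[] = [];
  for (const name of names) {
    const counts = snapshots.map((s) => s[name] ?? 0);
    const alwaysGrowing = counts.every((c, i) => i === 0 || c > counts[i - 1]);
    if (alwaysGrowing) suspects.push(name);
  }
  return suspects;
}

// Hypothetical snapshots taken at three points in a running process:
const suspects = leakSuspects([
  { Listener: 10, Buffer: 5 },
  { Listener: 40, Buffer: 3 },
  { Listener: 90, Buffer: 6 },
]);
console.log(suspects); // Listener grew every time; Buffer shrank once
```

A real pass over V8 heap snapshots would also look at retained size and retainer paths, but the grow-only filter above is the first cut a tool like this would apply.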
Beginner’s mindset is key as the models improve
15:44
So what would be some advice for technical founders to really become
15:49
maximalists at the latest model release? It sounds like people fresh out of school, or who don't have any
15:56
assumptions, might be better suited than engineers who have been working at it for a long time. And how
16:02
do the experts get better? I think for yourself, it's kind of a beginner's mindset,
16:07
and, I don't know, maybe just humility. I feel like engineers as a
16:12
discipline, we've learned to have very strong opinions, and senior engineers are kind of rewarded for this. In my old job
16:18
at a big company, when I hired architects and that type of engineer, you'd look for people that have a lot of experience and really strong
16:24
opinions. But it actually turns out a lot of this stuff just isn't relevant anymore, and a lot of these opinions
16:29
should change, because the model is getting better. So I think actually the biggest skill is people who can
16:35
think scientifically and can just think from first principles. How do you screen for that when you try to hire someone now for your team?
16:42
I sometimes ask for an example of when you were wrong. It's a really good one. You know, some of these classic behavioral questions, not
16:48
even coding questions, I think are quite useful, because you can see if people can recognize their mistake in hindsight, if
16:53
they can claim credit for the mistake, and if they learned something from it. And a lot of very senior
16:59
people especially will struggle with this. There are some founder types like this too, and I think founders in particular are actually quite good at it. But other people
17:06
sometimes will never take the blame for a mistake. But I don't know, for me
17:12
personally, I'm wrong probably half the time. Half my ideas are bad, and you just have to try stuff. You know, you
17:18
try a thing, you give it to users, you talk to users, you learn, and then eventually you might end up at a good idea. Sometimes you don't. And this is
17:25
the skill that I think in the past was very important for founders, but now I think it's very important for every
17:31
engineer. Do you think you would ever hire someone based on the Claude Code
17:37
transcript of them working with the agent? Because we're actively doing that right now. We just added, just as a
17:44
test, that you can upload a transcript of you coding a feature with Claude Code or
17:50
Codex or whatever it is. Personally, I think it's going to work. I mean, you can figure out how someone
17:56
thinks: whether they're looking at the logs or not, whether they can correct the agent if it goes off the rails.
18:02
Do they use plan mode? When they use plan mode, do they make sure that there are tests? You
18:08
know, all of these different things. Do they think about systems? Do they even understand systems? There's just so much that's sort of
18:14
embedded in that, I imagine. I just want a spiderweb graph, you know, like in those video games like
18:21
NBA 2K. It's like, oh, this person's really good at shooting or defense. You could imagine a spiderweb graph
18:26
of, you know, someone's Claude Code skill level. Yeah. What would the skills be?
18:32
I mean, I think it's things like systems, testing, maybe user behavior. There's got to be a design part,
18:39
like product sense, and maybe also just automating stuff. Mhm. My favorite thing in my CLAUDE.md is I have a line
18:46
that says: for every plan, decide whether it's overengineered, underengineered, or perfectly engineered, and why.
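A hypothetical CLAUDE.md fragment with that rule written down might look like this (illustrative wording, not an actual file from the conversation):

```markdown
# CLAUDE.md

- For every plan, decide whether it is overengineered, underengineered,
  or perfectly engineered, and explain why.
```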
18:53
I think this is something that we're trying to figure out, too, because when I look at the engineers on the team that I think are the most effective,
Hyper specialists vs hyper generalists
18:59
there are essentially two kinds; it's very bimodal. On one side there are the extreme specialists. I
19:05
named Jarred before; he's a really good example of this, and the Bun team is a really good example. Just hyper-specialists. They understand dev
19:11
tools better than anyone else. They understand JavaScript runtime systems better than anyone else. And then
19:16
there's the flip side, the hyper-generalists, and that's kind of the rest of the team. A lot of people span product and infra, or product
19:23
and design, or product and user research, product and business. I really like to see people that just do
19:30
weird stuff. I think that's one of those things that was kind of a warning sign in the past, because it's like, can these
19:35
people actually build something useful? That's the litmus test. Yeah. But nowadays, for
19:42
example, an engineer on the team, Daisy: she was on a different team and then she transferred onto our team, and the reason
19:47
I wanted her to transfer is she put up a PR for Claude Code a couple of
19:53
weeks after she joined. The PR was to add a new feature to Claude Code, but instead of just
19:59
adding the feature, what she did first was put up a PR to give Claude Code a tool so that it can test an arbitrary
20:05
tool and verify that it works. She put up that PR, and then she had Claude write its own tool instead of
20:11
implementing it herself. And I think it's this kind of out-of-the-box thinking that is just so interesting, because not a lot of people get it yet.
20:17
You know, we use the Claude Agent SDK to automate pretty much every part of development. It automates code
20:23
review, security review. It labels all of our issues. It shepherds things to production. It does pretty much
20:29
everything for us. But I think externally, I'm seeing a lot of people start to figure this out. It's
20:34
actually taken a while to figure out how you use LLMs in this way, how you use this new kind of automation. So it's
20:40
kind of a new skill. I guess one of the funnier things I've been having office hours with various founders about is: you have
20:46
the sort of visionary founder who has the idea, who has built this crystal palace of the product
20:53
that they want to build. They've totally loaded into their brain who the user is and what they feel and what
20:58
they're motivated by, and then they're sitting in Claude Code and they can do, you know, 50x work. But
21:06
they have engineers who work for them who don't have the, you know, crystal memory palace of the
21:11
platonic ideal of the product that the founder has, and they can only do like 5x work. Are you hearing stories
21:17
like that? There's usually a person who's the core designer of a
21:23
thing, and they're just trying to blast it out of their brain. What's the nature of teams like
21:30
that? It seems like that's almost a stable configuration: you're going to have the visionary who
21:35
is now unleashed. But, you know, maybe going back to the top of it, I'm experiencing this right now. I was
21:41
like, "Oh, well, I'm only a solo person, and I need to eat and sleep,
21:46
and I have, you know, a whole job. How am I going to do this?" You know, we just launched Claude
The vision for Claude teams
21:52
Teams and, you know, this is a way to do it, but you can also just build your own way to do it. It's pretty easy. What's the vision for Claude Teams?
21:59
Just collaboration. There's this whole new field of agent topologies that people are exploring: what are the ways that you can
22:05
configure agents? There's this one sub-idea, which is uncorrelated context windows. The idea is just multiple
22:10
agents with fresh context windows that aren't polluted with each other's context or their own previous context. And if you throw more
22:17
context at a problem, that's a form of test-time compute. So you just get more capability that way. And
22:22
then if you have the right topology on top of it, so the agents can communicate in the right way and they're laid out in the right way, then they can just build
22:29
bigger stuff. And so Teams is kind of one idea. There are a few more that are coming pretty soon. And the idea
22:35
is just, maybe it can build a little bit more. I think the first big example where it worked is our plugins
22:41
feature, which was entirely built by a swarm over a weekend. It just ran for a few days. There wasn't really
22:47
human intervention, and plugins is pretty much in the form it was in when it came out. How did you set that up? Did you
22:54
spec out the outcome that you were hoping for and then let it
22:59
figure out the details, and then let it run? Yeah, an engineer on the team just gave
23:04
Claude a spec and told Claude to use an Asana board, and then Claude just
23:11
put up a bunch of tickets on Asana and then spawned a bunch of agents, and the agents started picking up tasks. The main
23:16
Claude just gave them instructions and they all just figured it out: independent agents that didn't
23:21
have the context of the bigger spec. Right. Right. If you think about how our agents
23:27
actually started nowadays. And, you know, I haven't pulled the data on this, but I would bet the majority of agents are actually prompted by Claude today in the
23:34
form of subagents, because a subagent is just a recursive Claude Code. That's all it is in the code, and it's
23:40
just prompted by what we call Mama Claude. That's all it is. And I think probably if you look at most agents,
23:46
they're launched in this way. My Claude insights just told me to do this more for debugging, so that I get
Subagents
23:51
like, I spend a lot of time on debugging, and it would just be better to have multiple subagents spin up and
23:57
debug something in parallel. So then I just added that to my CLAUDE.md: hey, next time you try to fix a bug, have one agent
24:04
that looks in the logs and one that looks at the code path. That just seems sort of inevitable. For weird, scary bugs, I try to fix
24:11
bugs in plan mode, and then it seems to use the agents to sort of search everything. Whereas when you're
24:17
just trying to do it inline, it's like, okay, I'm going to do this one task instead of searching wide. This is
24:23
something I do all the time too. If the task seems kind of hard, this kind of research task, I'll calibrate the number of subagents I
24:29
ask it to use based on the difficulty of the task. So if it's really hard, I'll say use three, or maybe five, or even 10
24:35
subagents, research in parallel, and then see what they come up with. I'm curious, so then why don't you put that in your CLAUDE.md file?
24:42
It's kind of case by case, you know. CLAUDE.md, what is it? It's just a shortcut. If you find
24:48
yourself repeating the same thing over and over, you put it in the CLAUDE.md. But otherwise, you don't have to put everything there. You can just prompt
24:54
Claude. Are you also, in the back of your mind, thinking that maybe in six months you won't need to prompt that
24:59
explicitly? That the model will just be good enough to figure it out on its own? Maybe in a month.
25:05
No more need for plan mode in a month. Oh my god. I think plan mode probably has a limited lifespan.
25:10
Interesting. That's some alpha for everyone here. What would the world look like without plan mode? Do you just describe it at
A world without plan mode?
25:16
the prompt level and it would just do it? One-shot it? Yeah, we've started experimenting with this, because Claude Code can now enter plan mode by
25:23
itself. I don't know if you guys have seen that. Yeah. So we're trying to get this experience really good, so it would
25:30
enter plan mode at the same point where a human would have wanted to enter it. So I think it's something like this. But actually, with plan mode,
25:37
there's no big secret to it. All it does is add one sentence to the prompt that's like, "please don't
25:42
code." That's all it is. You can actually just say that. Yeah. So it sounds like a lot of the feature development for Claude Code is
25:48
very much, as we talk about at YC, talk to your users, and then you implement it. It
25:54
wasn't the other way around, where you had this master plan and then implemented all the features. Yeah. Yeah. I mean, that's all it was. With plan mode, we saw users
26:01
that were like, hey Claude, come up with an idea, plan this out, but don't write any code yet. And there were various
26:06
versions of this. Sometimes it was just talking through an idea. Sometimes it was these very sophisticated specs that they were asking Claude to write,
26:13
but the common dimension was: do a thing without coding yet. And so literally, this was like Sunday night at 10
26:19
p.m. I was just looking at GitHub issues and seeing what people were talking about, and looking at our internal Slack feedback channel, and
26:26
I just wrote this thing in like 30 minutes and then shipped it that night. It went out Monday morning. That was plan mode. So do you mean that there
26:32
will be no need for plan mode in the sense of, I'm worried that the model's going to do
26:38
the wrong thing or head off in the wrong direction? Because there will still be a need to think through the
26:43
idea and figure out exactly what it is that you want, and you have to do that somewhere. I kind of think about it in terms of
26:49
kind of increasing model capabilities. So maybe six months ago, a plan was insufficient. You'd get Claude
26:55
to make a plan, and even with plan mode, you'd still have to sit there and babysit, because it could go off track. Nowadays, what I do:
27:02
I say plan mode has a limited lifespan, but I'm a heavy plan mode user. Probably 80% of my
27:08
sessions I start in plan mode, and Claude will start making a plan. I'll move on to my second
27:14
terminal tab and have it make another plan, and then when I run out of tabs, I open the desktop app, then I
27:20
go to the code tab and start a bunch of tabs there, and they all start in plan mode, probably 80% of the time. Once the plan is good, and
27:26
sometimes it takes a little back and forth, I just get Claude to execute. And nowadays, what I find with Opus 4.5,
27:33
and I think it started with 4.6, is it got really good: once the plan is good, it just stays on track and it'll do
27:39
the thing exactly right almost every time. And so, you know, before, you had to babysit after the plan and before the
27:44
plan; now it's just before the plan. So maybe the next thing is you just won't have to babysit. You can just
27:49
give a prompt and Claude will figure it out. The next step is Claude just speaks to your users directly.
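Since plan mode, as described a moment earlier, is essentially one extra instruction appended to the prompt, its effect can be sketched like this (hypothetical function, not the actual Claude Code implementation):

```typescript
// Hypothetical sketch of plan mode as a prompt-level toggle.
// Not the actual Claude Code implementation.
function buildPrompt(userPrompt: string, planMode: boolean): string {
  // Plan mode just appends a "don't code yet" instruction to the prompt.
  return planMode
    ? `${userPrompt}\n\nPlease plan this out, but don't write any code yet.`
    : userPrompt;
}

console.log(buildPrompt("Fix the flaky test in ci.ts", true));
```

As Cherny notes, you can get the same behavior by typing that sentence yourself; the mode is a shortcut.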
27:56
Yeah, it just bypasses you entirely. It's funny, this is actually the current stuff for us. Our Claudes actually talk to each other. They talk to
28:02
our users on Slack, at least internally, pretty often. My Claude will tweet once in a while.
28:08
No way. But I actually delete it. It's just a little cheesy; I don't love the tone.
28:14
What does it want to tweet about? Sometimes it'll just respond to someone, because I always have Cowork
28:19
in the background, and it's the Cowork that really loves to do that, because it likes using a browser. That's funny. A really common pattern
28:25
is I ask Claude to build something. It'll look in the codebase, it'll see some engineer touched something in the git
28:30
blame, and then it'll message that engineer on Slack, just asking a clarifying question, and once it
28:36
gets an answer back, it'll keep going. What are some tips for founders now on how to build for the future? It sounds like
Tips for founders to build for the future
28:42
everything is really changing. What are some principles that will stay, and what will change?
28:48
So I think some of these are pretty basic, but they're even more important now than they were before. One example is latent
28:55
demand. I've mentioned it a thousand times; for me, it's just the single biggest idea in product. It's a
29:00
thing that no one understands, a thing I certainly did not understand in my first few startups. And the idea is:
29:06
people will only do a thing that they already do. You can't get people to do a new thing. If people are trying to do a thing and you make it easier,
29:13
that's a good idea. But if people are doing a thing and you try to make them do a different thing, they're not going to do that. And so you just have to make
29:19
the thing that they're trying to do easier. And I think Claude is going to get increasingly good at figuring out these kinds of product ideas for you,
29:25
just because it can look at feedback, it can look at debug logs, it can figure this out. That's what you mean: plan mode was latent demand, in that people were already,
29:32
like, I don't know, sitting with their Claude chat window open in a browser and talking to it to figure out the
29:38
spec and what it should do, and now plan mode just became that: you
29:43
just do it in Claude Code. Yeah. Yeah, that's it. Sometimes what I'll do is I'll just walk around the office on our floor and I'll
29:49
kind of stand behind people. I'll say hi, so it's not, you know... and then I'll just see how they're
29:55
using Claude Code. And this is also just something I saw a lot, but it also came up in GitHub issues;
30:00
people were talking about it. It seems like you're surprised how far the terminal has gone and how far it's been
30:06
pushed. How far do you think it has left to go, given this world of
How much life does the terminal still have?
30:11
swarms of multiple agents? Do you think there's going to be a need for a
30:16
different UI on top of it? It's funny, if you'd asked me this a year ago, I would have said the terminal has like a three-month lifespan and then
30:21
we're going to move on to the next thing. And you can see us experimenting with this, right? Because Claude Code started in a terminal, but now
30:27
it's on the web, it's in the desktop app (we've had that for, you know,
30:33
three months or six months or something, just in the code tab), it's in the iOS and Android apps, in
30:38
the code tab, it's in Slack, it's in GitHub, there are VS Code extensions, there are JetBrains extensions. So we're
30:45
always experimenting with different form factors for this thing, to figure out what's the next
30:50
thing. I've been wrong so far about the lifespan of the CLI. So I'm probably not the person to forecast that.
30:56
What about your advice to dev tool founders? Someone's building a dev tool company today. Should they just
Advice for dev tool founders
31:02
be building for engineers and humans, or should they be thinking more about what Claude is going to think
31:08
and want, and build for the agent? The way I would frame it is: think about the thing that the model wants to do and
31:16
figure out how you make that easier. And that's something that we saw when I first started hacking
31:22
on Claude Code: I realized this thing just wants to use tools. It just wants to interact with the world. And
31:27
how do you enable that? Well, the way you don't do it is to put it in a box and say, here's the
31:33
API, here's how you interact with me, and here's how you interact with the world. The way you do it is you see what tools it wants to use, you see what it's
31:39
trying to do, and you enable that, the same way that you do for your users. And so if you're building a dev
31:44
tool startup, I would think about: what is the problem you want to solve for the user? And then, when you apply the model to solving this
31:50
problem, what is the thing the model wants to do? And then, what is the technical and product solution that serves the latent demand of both? YC's next batch is
31:58
now taking applications. Got a startup in you? Apply at ycombinator.com/apply.
32:04
It's never too early and filling out the app will level up your idea. Okay, back
32:09
to the video. Back in the day, more than 10 years ago, you were a very heavy
Claude Code and TypeScript parallels
32:15
user, and you wrote a book about TypeScript, right? Before TypeScript was
32:20
cool. This is when everyone was deep in JavaScript. This was back in the early 2010s, right?
32:25
Yeah, something like that. Before TypeScript was a thing. Because back then it was a very weird language. You're
32:32
not supposed to be able to do a lot of those things with typing JavaScript, and now it's
32:37
the right thing. And it feels like Claude Code in the terminal has a lot of parallels with TypeScript at the
32:43
beginning. TypeScript makes a lot of really weird language decisions. If you look at
32:49
the type system, pretty much anything can be a literal type, for example, and this is super weird, because
32:55
even Haskell doesn't do this; it's just too extreme. Or it has conditional types, which I
33:02
don't think any language had thought of at all. It was very strongly typed. Yeah, it was very strongly typed. And the
33:08
idea was, when Joe Pamer and Anders and the early team were building this thing, the way
33:14
they built it was: okay, we have these teams with these big untyped JavaScript codebases; we have to get types in
33:19
there, but we're not going to get engineers to change the way that they code. You're not going to get JavaScript people to have, you
33:25
know, 15 layers of class inheritance like a Java programmer would, right? They're going to write code the way they're going to write it. They're
33:30
going to use reflection, and they're going to use mutation, and they're going to use all these features that traditionally are very, very
33:35
difficult to type. Very unsafe types, to any strong functional programmer. That's right. That's
33:41
right. And so the thing they did, instead of getting people to change the way that they code, is they built a type system around this. And it
33:48
was just brilliant, because there are all these ideas that no one was thinking about, even in academia; no one
33:53
thought of a bunch of these ideas. It purely came out of the practice of observing people and seeing how JavaScript programmers want to write
33:59
code. And so, you know, for Claude Code there are some ideas that are kind of similar, in that
34:05
you can use it like a Unix utility. You can pipe into it, you can pipe out of it. In some ways it is kind of
34:10
rigorous in this way, but in almost every other way, it's just the tool that we wanted. I build a tool for
34:17
myself, and then the team builds the tool for themselves, and then for Anthropic employees, and then for users, and it just
34:23
ends up being really useful. It's not this principled, academic thing, and I think the
34:29
proof is actually in the results. Now, fast-forward more than 15 years later:
34:34
not many codebases are in Haskell, which is more academic, and there are tons of them now in TypeScript, because it's way
34:40
more practical, right? Which is interesting. Yeah, it is interesting, right? It's like TypeScript solves a problem.
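The literal types and conditional types mentioned above look like this in TypeScript (a minimal illustration):

```typescript
// Literal types: specific values are themselves types.
type Direction = "up" | "down"; // union of string literal types
const d: Direction = "up";      // "sideways" here would be a compile error

// Conditional types: a type-level if/else resolved by the compiler.
type ElementType<T> = T extends (infer U)[] ? U : T;
const n: ElementType<number[]> = 42;   // number[] resolves to number
const s: ElementType<string> = "hi";   // not an array, so stays string

console.log(d, n, s);
```

The point from the conversation: these features exist to describe code as JavaScript programmers actually write it, rather than forcing a traditional type discipline on them.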
34:45
I guess one thing that's cool, and I don't know how many people know this, but Claude Code is actually one of the most beautiful terminal apps out there, and it is
34:54
actually written with React in the terminal. When I first started building it, you know, I did front-end engineering
35:00
for a while. And I'm sort of a hybrid: I do design and user
35:06
research, and write code, and all this stuff. And we love hiring engineers that are like this;
35:11
we love generalists. So for me it was like, okay, I'm building a thing for the terminal. I'm actually kind of a shitty Vim user. So how do I build a thing
35:18
for people like me who are going to be working in a terminal? And I think just the delight is so
35:24
important. And I feel like at YC this is something you talk about a lot, right? Build a thing that people love. If the product is useful but you
35:30
don't fall in love with it, that's not great. So it kind of has to do both. Designing for the terminal, honestly, has
Designing for the terminal was hard
35:35
been hard, right? It's like 80 by 100 characters or whatever. You have 256 colors, you have one font
35:41
size, you don't have mouse interactions, there's all this stuff you can't do, and there are all these very hard trade-offs. So, a little-known
35:48
thing, for example, is that you can actually enable mouse interactions in a terminal. You can enable clicking and
35:53
stuff. Oh, how do you do that in Claude Code? I've been trying to figure out how to do this. We don't have it in Claude Code, because we actually prototyped it a few
35:59
times and it felt really bad, because the trade-off is you have to virtualize scrolling, and so there are all these weird
36:04
trade-offs, because the way terminals work, there's no DOM, right? There are ANSI escape codes and these kind of weird,
36:11
organically evolved specs since like the 1960s or whatever. Yeah. It feels like BBSs. It's like a
36:16
BBS door game. Yeah. Oh my god, that's a great compliment. Yeah. Yeah. Like it should feel like you're discovering
36:22
Legend of the Red Dragon. It's fantastic. Oh my god. Yeah. But we've had to discover all these UX principles
36:28
for building in the terminal, because no one really writes about this stuff. And if you look at the big terminal apps of,
36:34
you know, the 80s or 90s or 2000s or whatever, they use ncurses and they have all these windows and
36:39
things like that. And it just looks kind of janky by modern standards. It just looks too heavy and complicated.
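The mouse-interaction toggle mentioned a moment ago is done with standard xterm escape sequences written to the terminal. A minimal sketch (mode `1000` is basic button-press reporting; exact support varies by terminal emulator, and this is an illustration, not how Claude Code does it):

```typescript
// xterm mouse-reporting escape sequences (support varies by emulator).
const ENABLE_MOUSE = "\x1b[?1000h";  // start reporting button presses
const DISABLE_MOUSE = "\x1b[?1000l"; // stop reporting

// Wrap a rendered frame so the terminal is always restored afterwards:
// enable reporting, draw the frame, then disable reporting.
function wrapWithMouseReporting(frame: string): string {
  return ENABLE_MOUSE + frame + DISABLE_MOUSE;
}
```

Once reporting is on, the terminal sends click events back as input bytes, which is exactly why an app then has to take over scrolling itself, the trade-off described above.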
36:44
And so we had to reinvent a lot. For example, something like the terminal spinner, just the spinner words, has gone
36:50
through probably, I want to say, 50, maybe 100 iterations at this point. And
36:56
probably 80% of those didn't ship. We tried one, it didn't feel good, move on to the next one. Try it, didn't feel
37:01
good, move on to the next one. And this was one of the amazing things about Claude Code, right?
37:06
You can write these prototypes and just do 20 prototypes back to back, see which one you like, and then ship that, and the whole thing takes
37:12
maybe a couple of hours. Whereas in the past, what you would have had to do is use Origami or Framer or something like that. You'd build
37:18
maybe three prototypes, and it took two weeks. It just took much, much longer. And so we have this luxury: we have to
37:24
discover this new thing, we have to build a thing, we don't know what the right endpoint is, but we can iterate so quickly, and that's what makes
37:31
it really easy, and that's what lets us build a product that's joyous and that people like to use. Boris, you had other advice for
Other advice for builders
37:38
builders, and we kept interrupting you because we have so many questions. I would say, okay, maybe two
37:45
pieces of advice that are kind of weird, because they're about building for the model. One is: don't build for the
37:52
model of today, build for the model of six months from now. This is sort of weird, right? Because you can't find PMF if the product doesn't work.
37:58
But actually, this is the thing that you should do, because otherwise what will happen is you spend a bunch of work, you find PMF for the product right now, and
38:04
then you're just going to get leapfrogged by someone else, because they're building for the next model, and a new model comes out every few months. Use
38:10
the model, feel out the boundary of what it can do, and then build for the model that you think will be the model maybe six months from now. I think the second
38:16
thing is, you know, actually in the Claude Code area where we sit, we have a framed copy
38:22
of "The Bitter Lesson" on the wall. This is the Rich Sutton essay; everyone should read it if you
38:28
haven't. And the idea is that the more general model will always beat the more specific model, and there are a lot of
38:35
corollaries to this, but essentially what it boils down to is: never bet against the model. And so this is just
38:41
a thing that we always think about, where we could build a feature into Claude Code, we could make it better
38:47
as a product, and we call this scaffolding: all the code that's not the model itself. But we could also just wait a couple of months, and the
38:53
model can probably just do the thing instead. And there's always this trade-off, right? It's engineering
38:58
work now, and you can extend the capability a little bit, maybe 10 or 20% in whatever domain, on this
39:04
spider chart of what you're trying to extend. Or you can just wait and the next model will do it. So always think in terms
39:10
of this trade-off, where you actually want to invest, and assume that whatever the scaffolding is, it's just
39:15
tech debt. How often do you rewrite the codebase of Claude Code? Is it every six months,
39:20
with this? Is there scaffolding that you've deleted because you don't need it anymore, because the model just improved?
39:26
Oh, so much. Yeah. All of Claude Code has just been written and rewritten and rewritten, over and over and over. We unship tools every couple of
39:34
weeks. We add new tools every couple of weeks. There's no part of Claude Code that was around six months ago. It's just
39:40
constantly rewritten. Would you say most of the codebase for current Claude Code, say 80% of it,
39:46
is less than a couple of months old? Yeah, definitely. It might even be less than... yeah, maybe a
39:51
couple of months. That feels about right. So that's the life cycle of code now. That's another alpha: expecting the
39:56
shelf life of code to be just a couple of months. Yeah. For the best founders. Did you see Steve Yegge's post
40:02
about how awesome working at Anthropic is? I think there's a line in there that says that an Anthropic engineer
40:09
currently averages 1,000x more productivity than a Google engineer at
40:15
Google's peak, which is really an insane number, honestly. 1,000x. You
40:22
know, three years ago we were still talking about 10x engineers; now we're talking about 1,000x on top of a Google
40:27
engineer in their prime. This is unbelievable, honestly. Yeah. I mean, internally, if you look at
Productivity per engineer
40:32
technical employees, they all use quad code every day. Um, and even non-technical employees, I think like
40:37
half the sales team uses quad code. Um, they they've started switching to co-work because it's a little easier to use. It has like a VM, so it's a little
40:44
bit safer. But yeah, we actually we just pulled a stat and the I think the team doubled in size last year, but
40:50
productivity per engineer grew something like 70%. It's measured by just the simplest, stupidest
40:56
measure: pull requests. But we also cross-check that against commits, the lifetime of commits, and things like that. And since
41:02
Claude Code came out, productivity per engineer at Anthropic has grown 150%. Oh my god. And this is crazy, because in my
41:10
old life I was responsible for code quality at Meta. I was responsible for the quality of all of our codebases across
41:16
every product: Facebook, Instagram, WhatsApp, whatever. And one of the things the team
41:22
worked on was improving productivity. And back then, seeing a gain of something like 2% in productivity, that was a
41:28
year of work by hundreds of people. And so this kind of gain is just unheard of. Completely unheard of.
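The productivity numbers above are simple ratio arithmetic: output per engineer in one period divided by output per engineer in an earlier period. A minimal sketch of that calculation, using illustrative placeholder counts rather than Anthropic's actual figures:

```python
def per_engineer_growth(prs_then: int, eng_then: int,
                        prs_now: int, eng_now: int) -> float:
    """Percent change in PRs landed per engineer between two periods."""
    rate_then = prs_then / eng_then  # PRs per engineer, earlier period
    rate_now = prs_now / eng_now     # PRs per engineer, later period
    return (rate_now / rate_then - 1) * 100

# Illustrative only: if the team doubles (100 -> 200 engineers) while
# total PRs grow 3.4x (1,000 -> 3,400), per-engineer output is up 70%.
print(round(per_engineer_growth(1000, 100, 3400, 200)))  # -> 70
```

The cross-check Cherny mentions amounts to running the same ratio over several proxies (PRs, commits, commit lifetime) and confirming they tell a consistent story, since any single proxy can drift.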
41:35
What drove you to come over to Anthropic? I mean, basically, as a builder you could go anywhere. What was the
Why Boris chose to join Anthropic
41:40
moment that made you say, actually, this is the set of people, or this is the approach? I was living in rural Japan,
41:47
and I was opening up Hacker News every morning and reading the news, and
41:52
at some point it all just started to be AI stuff, and I started to use some of these early products. And
41:59
I remember the first couple times I used it, it just took my breath away. That's
42:04
very cheesy to say, but that was actually the feeling. It was amazing. As a builder, I'd just
42:11
never felt this feeling, using these very, very early products. That was in the Claude 2 days, or something like that.
42:17
And so I just started talking to friends at labs, just to see what was going on. And I met
42:25
Ben Mann, who's one of the founders at Anthropic, and he just immediately won me over. And as soon as I met
42:32
the rest of the team at Anthropic, they won me over too, I think in two ways. One is that it operates as a research lab. The
42:40
product was teeny, teeny tiny. It's really all about building a safe model. That's all that matters. And so this
42:46
idea of being very close to the model, very close to development, and the product not being the most important thing anymore,
42:52
because the model is the thing that's most important. That really resonated with me after
42:58
building product for many years. And the second thing was just how mission-driven it is. I'm a
43:04
huge sci-fi reader. My bookshelf is just filled with sci-fi. And so I just know how bad this can go.
43:10
And when I think about what's going to happen this year, you know, it's going to be totally insane. And in
43:15
the worst case it can go very, very bad. So I just wanted to be at a place that really understood that and
43:20
really internalized that. And at Anthropic, if you overhear conversations in the lunchroom or in the
43:26
hallway, people are talking about AI safety. This is really the thing that everyone cares about more than anything.
43:32
And so I just wanted to be in a place like that. I know for me personally the mission is just so important.
43:37
What is going to happen this year? Okay. So think back to six months ago and the
43:44
predictions people were making. Dario predicted that 90% of the code at Anthropic would be written by
43:50
Claude. This is true. For me personally it's been 100% since Opus 4.5.
43:56
I just uninstalled my IDE. I don't edit a single line of code by hand. It's just 100% Claude Code and Opus. And you
44:04
know, I land something like 20 PRs a day, every day. If you look at Anthropic overall, it ranges between 70 and 90%,
44:11
depending on the team. For a lot of teams it's also 100%; for a lot of people it's 100%. And I remember
44:16
making this prediction back in May, when we GA'd Claude Code, that you wouldn't need an IDE to code anymore. And it was
44:24
totally crazy to say. I feel like people in the audience gasped, because it was such a silly prediction at the time. But really, all
44:29
it is is tracing the exponential. And this is just so deep in the DNA at Anthropic, because three
44:36
of our founders were co-authors of the scaling laws paper. They saw this very early. So this is just
44:42
tracing the exponential: this is what's going to happen. And yes, that happened. So, continuing to trace the exponential, I
How coding will change
44:47
think what will happen is coding will be generally solved for everyone. And I think today coding is practically
44:53
solved for me, and I think it'll be the case for everyone, regardless of domain. I think we're going to start to see the title
44:59
software engineer go away. It's just going to be maybe builder, maybe product manager. Maybe we'll keep
45:05
the title as a kind of vestigial thing, but the work people do is not just going to be coding. Software
45:10
engineers are also going to be writing specs. They're going to be talking to users. It's this thing we're starting to see right now on our team,
45:16
where engineers are very much generalists and every single function on our team codes: our PMs code, our
45:22
designers code, our EM codes, our finance guy codes,
45:28
everyone on our team codes. We're going to start to see this everywhere. So this is kind of
45:33
the lower bound if we just continue the trend. The upper bound, I think, is a lot scarier. This is something
45:40
like, you know, we hit ASL-4. At Anthropic, we talk about these safety levels. ASL-3 is where
45:46
the models are right now. ASL-4 is where the model is recursively self-improving. And if this happens, essentially, we
45:51
have to meet a bunch of criteria before we can release a model. And so the extreme is that, you know, this happens,
45:57
or there's some kind of catastrophic misuse, like people using the model to design bio viruses or zero-days,
46:03
stuff like this. And this is something we're really, really actively working on so that it doesn't happen. I think it's just been,
46:10
honestly, so exciting and humbling seeing how people are using Claude Code. I just
46:15
wanted to build a cool thing, and it ended up being really useful, and that was so surprising and so exciting.
46:21
My impression from Twitter, or just from the outside, is that basically everyone went away over the holidays and then found
Outro
46:26
out about Claude Code, and it's just been crazy ever since. Is that how it was for you internally? Were you
46:33
having a nice Christmas break, and then came back, and then what happened? Well, actually, for all of December I
46:38
was traveling around. I took a coding vacation. So we were traveling around and I was just coding every day. That was really
46:44
nice. And then I also started to use Twitter at the time, because I worked on Threads back then, way back when. So
46:50
I've been a Threads user for a while, and I just tried to see the other platforms where people are.
46:55
Yeah. I think for a lot of people, that was the moment they discovered Opus 4.5. I kind of already knew.
47:00
Mhm. And internally, Claude Code has just been on this exponential tear for many, many months now, so it just
47:07
became even more steep. That's what we saw. And if you look at Claude Code now, there was some stat from
47:13
Mercury that 70% of startups are choosing Claude as their model of choice. There was some other stat
47:19
from SemiAnalysis that 4% of all public commits are made by Claude Code, of all code written everywhere.
47:26
All the companies use Claude Code, from the biggest companies to the smallest startups.
47:31
It even plotted the course for Perseverance, the Mars rover. This is just
47:37
the coolest thing for me. We even printed posters, because the team was like, "Wow, this is just so cool that NASA chooses to use this
47:43
thing." So yeah, it's humbling. But it also feels like the very beginning. What's the interaction between Claude Code and
47:50
Cowork? You know, was it a fork of Claude
47:56
Code? Did you have Claude Code look at the Claude Code codebase and say, let's make a new spec for non-technical
48:02
people that keeps all the lessons, and then it sort of went off for a couple days and did that?
48:07
What's the genesis of that, and where do you think it goes? This is going to be my fifth time
48:12
using the words latent demand. It was just that. I mean, we were looking at Twitter, and there was that one guy who was using Claude Code to
48:19
monitor his tomato plants. Mhm. There was this other person who was using it to recover wedding
48:24
photos off of a corrupted hard drive. There were people using it for finance. When we looked
48:30
internally at Anthropic, every designer is using it, the entire finance team at this point is using it, and the entire
48:36
data science team is using it, not for coding. People are jumping through hoops to install a thing in the terminal just so they can use this. So we knew for a
48:42
while that we wanted to build something, and so we were experimenting with a bunch of different ideas, and the thing that
48:47
took off was just a little Claude Code wrapper in a GUI in the desktop app. And that's all it is. It's just Claude Code under the hood. It's
48:53
the same agent. Oh wow. And Felix and the team, well, Felix was an early Electron contributor, so he
49:00
knows that stack really well, and he was hacking on various ideas, and they built it in, I think, something like
49:06
10 days. It was just 100% written by Claude Code. And it just felt ready to release. There was a lot
49:12
of stuff we had to build for non-technical users, so it's a little bit different than for a technical audience.
49:18
All the code runs in a virtual machine. There are a lot of protections against deletion and
49:24
things like this. There's a lot of permission prompting and other guardrails for users. Yeah, it was
49:31
honestly pretty obvious. Boris, thank you so much for making something that is taking away all my sleep, but in
49:38
return, it's making me feel creator mode again, sort of founder mode again. It's been an exhilarating three weeks. I
49:45
can't believe I waited that long since November to actually get into it. Thank you so much for being with us. Thank you
49:50
for building what you're building. Yeah, thanks for having me. And send bugs.
49:56
Sounds good.
50:01
Come on now.