*AI Summary*
### *Recommended Reviewers*
This material is best reviewed by a *Technical Committee of AI Systems Architects and Machine Learning Research Leads*. This group possesses the necessary cross-disciplinary expertise in distributed systems, hardware-software co-design, and large-scale model optimization to evaluate the strategic and technical shifts described by Jeff Dean.
---
### *Abstract*
In this technical session, Jeff Dean, Chief AI Scientist at Google, outlines the architectural and organizational evolution of the Gemini era. The discussion centers on the "Pareto Frontier" strategy, where high-reasoning frontier models (Pro/Deep Think) serve as the necessary catalysts for high-efficiency, low-latency models (Flash) via advanced distillation. Dean emphasizes a paradigm shift in optimization: moving from FLOP-centric thinking to an energy-centric model, where the cost of data movement (picojoules per bit) is the primary bottleneck for future scaling.
Key technical disclosures include the history of Google’s in-memory search index (active since 2001), the co-design of TPUs to anticipate ML workloads 2–6 years in advance, and the strategic move toward unified, multimodal models over specialized symbolic systems. Dean predicts a future characterized by "illusionary" attention across trillions of tokens, personalized AI agents acting as managed "sub-teams," and a leap in inference speeds to 10,000 tokens per second to facilitate deep reasoning rollouts.
---
### *Strategic Technical Summary*
* *0:01:31 Frontier vs. Flash & Distillation Strategy:* Google’s model strategy is built on the Pareto frontier. Frontier models (Pro) define the limits of capability, while Flash models provide the economical, latency-optimized deployment path. Distillation is the engine that allows Flash models of the current generation to outperform Pro models of the previous generation (see the Pareto-frontier sketch after this list).
* *0:05:09 The Role of Logits in Distillation:* Distillation allows smaller models to capture the "soft supervision" of the larger model’s logits, which provides more information than hard labels alone. This process is essential for maintaining reasoning capabilities in lightweight architectures (a minimal distillation-loss sketch follows this list).
* *0:08:15 Latency as a Primary Constraint:* Lowering latency is not just a UX improvement but a functional requirement for agentic workflows. As models are asked to perform more complex, multi-token tasks, the "tokens per second" metric determines the feasibility of the task itself (see the wall-clock arithmetic after this list).
* *0:15:01 Attending to Trillions of Tokens:* Current quadratic attention mechanisms are insufficient for trillion-token contexts. The goal is to develop systems that provide the "illusion" of attending to the entire internet or a user’s total personal history by narrowing focus through multi-stage retrieval and algorithmic refinements (a toy two-stage retrieval sketch follows this list).
* *0:20:11 Evolution from Google Search:* Modern LLM retrieval pipelines mirror the evolution of Google Search. In 2001, Google moved its entire index to memory to allow for "soft" query semantics (synonyms, intent), which was a precursor to the semantic embedding space used by LLMs today.
* *0:27:11 Systems Design Principles:* A robust system should be designed to scale by a factor of 5x to 10x. Once a metric hits 100x (e.g., traffic or index size), the design space usually shifts fundamentally—such as moving from disk-based to memory-based indices.
* *0:32:09 Energy-Based Scaling (The 1000:1 Rule):* Computation is cheap; data motion is expensive. A matrix multiply costs ~1 picojoule, while moving that data across a chip costs ~1,000 picojoules. Batching is a strategy to amortize the energy cost of moving weights from memory to the multiplier units (see the energy-accounting sketch after this list).
* *0:36:16 TPU Co-Design Loop:* TPU development requires a 2- to 6-year lookahead. Google’s advantage stems from the feedback loop between ML researchers and hardware architects, allowing for "speculative" hardware features that anticipate future architectural shifts (e.g., lower precision, sparsity).
* *0:42:21 RL in Non-Verifiable Domains:* A major research frontier is applying Reinforcement Learning (RL) to domains that lack a "ground truth" checker (unlike math or code). This may involve using models as critics to evaluate and rate the relevance of retrieved data.
* *0:46:27 Unified vs. Specialized Models:* Dean argues that unified multimodal models will consistently outperform specialized symbolic systems. Human reasoning handles symbols through distributed neural representations; models should do the same rather than rely on discrete symbolic modules.
* *0:52:14 Capacity and Knowledge Retrieval:* Large models should not waste parameter space memorizing obscure facts that can be retrieved. The ideal architecture maximizes parameter space for "reasoning" while relying on high-bandwidth retrieval for "knowledge."
* *1:00:31 The History of Scaling:* Since his 1990 thesis, Dean’s core mantra has been "Bigger model, more data, better results." Successes in speech (2011) and vision (2012) were driven by early adopters of model and data parallelism on CPU clusters before the advent of the TPU.
* *1:07:15 The Gemini Origin Story:* The Gemini project was initiated by a one-page memo from Dean to unify fragmented efforts across Google Brain and DeepMind. The name refers to "twins coming together" and is a nod to the NASA project preceding Apollo.
* *1:11:38 Managing "50 AI Interns":* Future software engineering will shift toward managing sub-teams of agents. The core skill for engineers will be the ability to write "crisp specifications" (English-language prompts) to eliminate ambiguity in agent execution.
* *1:21:29 The 10,000 Tokens/Sec Vision:* Future hardware will support speeds of 10,000 tokens/sec. This isn't for faster reading, but for "Deep Thinking"—allowing a model to perform massive parallel rollouts and internal reasoning chains before presenting a concise, high-quality result.
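The sketches below make several of the points above concrete. First, the Pareto-frontier framing from 0:01:31: given a set of models scored by cost and quality, the frontier is the subset that no other model beats on both axes at once. The model names, prices, and quality scores below are invented placeholders, not Gemini figures.

```python
# Pareto-frontier sketch: keep a model only if no other model is both cheaper
# and at least as good. All names, prices, and quality scores are invented.
models = {
    "frontier-pro": {"cost_per_mtok": 10.00, "quality": 0.90},
    "flash":        {"cost_per_mtok": 0.50,  "quality": 0.80},
    "flash-lite":   {"cost_per_mtok": 0.10,  "quality": 0.70},
    "last-gen-pro": {"cost_per_mtok": 8.00,  "quality": 0.78},
}

def pareto_frontier(candidates):
    frontier = {}
    for name, m in candidates.items():
        dominated = any(
            other["cost_per_mtok"] <= m["cost_per_mtok"] and other["quality"] >= m["quality"]
            for other_name, other in candidates.items()
            if other_name != name
        )
        if not dominated:
            frontier[name] = m
    return frontier

print(sorted(pareto_frontier(models)))
# ['flash', 'flash-lite', 'frontier-pro'] -- last-gen-pro falls off the frontier
# because this generation's Flash is both cheaper and better, which is exactly
# the "new Flash beats old Pro" dynamic described above.
```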
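Second, the "logits as soft supervision" point from 0:05:09 can be written as the temperature-scaled soft-target loss from the original distillation work by Hinton, Vinyals, and Dean, blended with an ordinary hard-label loss. The tensors here are random stand-ins, and the temperature and loss weighting are arbitrary choices, not Gemini's training recipe.

```python
import numpy as np

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    """Blend of (a) KL between temperature-softened teacher and student
    distributions -- the teacher's logits acting as soft supervision -- and
    (b) ordinary cross-entropy against the hard labels."""
    log_p_student = log_softmax(student_logits / T)
    p_teacher = np.exp(log_softmax(teacher_logits / T))
    # KL(teacher || student), averaged over the batch; the T^2 factor keeps the
    # soft-loss gradients on a comparable scale as T changes.
    soft = (p_teacher * (np.log(p_teacher + 1e-12) - log_p_student)).sum(-1).mean() * T * T
    hard = -log_softmax(student_logits)[np.arange(len(hard_labels)), hard_labels].mean()
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 4 positions over a vocabulary of 8.
rng = np.random.default_rng(0)
teacher_logits = 3.0 * rng.normal(size=(4, 8))   # stand-in for frontier-model logits
student_logits = rng.normal(size=(4, 8))         # stand-in for Flash-scale logits
labels = rng.integers(0, 8, size=4)
print(distillation_loss(student_logits, teacher_logits, labels))
```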
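Third, the latency point from 0:08:15 (and the 10,000 tokens/sec target from 1:21:29) comes down to simple wall-clock arithmetic: the more tokens a task needs end to end, the more generation speed decides whether it is usable at all. The token counts below are made-up round numbers, and each job is treated as a single serial stream for simplicity.

```python
# Wall-clock time for tasks that must generate many tokens before the user
# sees anything. Token counts are illustrative; each job is treated as one
# serial stream for simplicity.
task_tokens = {
    "write a for loop": 200,
    "write a whole software package": 20_000,
    "deep-think reasoning rollouts": 2_000_000,
}

for tokens_per_sec in (50, 500, 10_000):
    print(f"\nat {tokens_per_sec:,} tokens/sec:")
    for task, n_tokens in task_tokens.items():
        minutes = n_tokens / tokens_per_sec / 60
        print(f"  {task:32s} ~{minutes:8.1f} min")
```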
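Fourth, the "illusion" of attending to trillions of tokens from 0:15:01 is a retrieval funnel: a cheap first stage narrows an enormous corpus to a shortlist, a more expensive second stage re-scores it, and only that shortlist ever enters the model's real context window. The corpus here is random vectors and the second-stage scorer is a stub, so this is a shape-of-the-pipeline sketch only (the 117-document figure echoes the episode description).

```python
import numpy as np

rng = np.random.default_rng(42)
DIM = 64

# Stand-in corpus: random unit vectors. In a real system these would be
# learned embeddings for billions of documents served from an ANN index.
corpus = rng.normal(size=(100_000, DIM))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def two_stage_retrieve(query_vec, coarse_k=1_000, final_k=117):
    """Stage 1: cheap dot-product scan narrows 100k docs to coarse_k.
    Stage 2: a (stubbed) more expensive re-scorer keeps final_k docs --
    the only documents the model ever actually attends to."""
    coarse_scores = corpus @ query_vec                        # stage 1
    coarse_ids = np.argpartition(-coarse_scores, coarse_k)[:coarse_k]
    rerank = coarse_scores[coarse_ids] + 0.01 * rng.normal(size=coarse_k)  # stage 2 stub
    keep = np.argsort(-rerank)[:final_k]
    return coarse_ids[keep]

query = corpus[123]                      # pretend this is the embedded user query
shortlist = two_stage_retrieve(query)
print(len(shortlist), "documents make it into the context window")
```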
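Finally, the 1000:1 energy point from 0:32:09: weights must be moved from memory to the multipliers, and batching shares that movement cost across every sequence in the batch while the per-token compute cost stays fixed. The 1-versus-1,000 picojoule ratio is the rounded order of magnitude quoted in the talk; applying it per byte, and the model size, weight precision, and batch sizes, are simplifying assumptions for illustration.

```python
# Back-of-the-envelope energy accounting: compute is cheap, moving bytes is not.
PJ_PER_MULTIPLY = 1.0       # ~1 pJ per multiply (order of magnitude from the talk)
PJ_PER_BYTE_MOVED = 1000.0  # movement quoted as ~1000x a multiply; applied per byte here

PARAMS = 8e9                # hypothetical 8B-parameter model
BYTES_PER_PARAM = 1         # assume int8/fp8 weights

def joules_per_token(batch_size):
    # Weights are streamed from memory once per step and shared by every
    # sequence in the batch, so the movement cost is amortized over batch_size.
    move_pj = PARAMS * BYTES_PER_PARAM * PJ_PER_BYTE_MOVED / batch_size
    # Each generated token still needs roughly one multiply per parameter.
    compute_pj = PARAMS * PJ_PER_MULTIPLY
    return (move_pj + compute_pj) * 1e-12

for batch in (1, 8, 64, 512):
    print(f"batch={batch:4d}  ~{joules_per_token(batch):.3f} J per token")
# batch=1 is dominated by weight movement; larger batches approach the compute floor.
```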
AI-generated summary created with gemini-3-flash-preview for free via RocketRecap-dot-com. (Input: 39,119 tokens, Output: 1,333 tokens, Est. cost: $0.02).
Below, I will provide input for an example video (comprising the title, description, and transcript, in this order) and the corresponding abstract and summary I expect. Afterward, I will provide a new transcript that I want summarized in the same format.
**Please give an abstract of the transcript and then summarize the transcript in a self-contained bullet list format.** Include starting timestamps, important details and key takeaways.
Example Input:
Fluidigm Polaris Part 2- illuminator and camera
mikeselectricstuff
5,857 views Aug 26, 2024
Fluidigm Polaris part 1 : • Fluidigm Polaris (Part 1) - Biotech g...
Ebay listings: https://www.ebay.co.uk/usr/mikeselect...
Merch https://mikeselectricstuff.creator-sp...
Transcript
0:00
so I've stripped all the bits of the
0:01
optical system so basically we've got
0:03
the uh the camera
0:05
itself which is mounted on this uh very
0:09
complex
0:10
adjustment thing which obviously to set
0:13
you the various tilt and uh alignment
0:15
stuff then there's two of these massive
0:18
lenses I've taken one of these apart I
0:20
think there's something like about eight
0:22
or nine Optical elements in here these
0:25
don't seem to do a great deal in terms
0:26
of electr magnification they're obiously
0:28
just about getting the image to where it
0:29
uh where it needs to be just so that
0:33
goes like that then this Optical block I
0:36
originally thought this was made of some
0:37
s crazy heavy material but it's just
0:39
really the sum of all these Optical bits
0:41
are just ridiculously heavy those lenses
0:43
are about 4 kilos each and then there's
0:45
this very heavy very solid um piece that
0:47
goes in the middle and this is so this
0:49
is the filter wheel assembly with a
0:51
hilariously oversized steper
0:53
motor driving this wheel with these very
0:57
large narrow band filters so we've got
1:00
various different shades of uh
1:03
filters there five Al together that
1:06
one's actually just showing up a silver
1:07
that's actually a a red but fairly low
1:10
transmission orangey red blue green
1:15
there's an excess cover on this side so
1:16
the filters can be accessed and changed
1:19
without taking anything else apart even
1:21
this is like ridiculous it's like solid
1:23
aluminium this is just basically a cover
1:25
the actual wavelengths of these are um
1:27
488 525 570 630 and 700 NM not sure what
1:32
the suffix on that perhaps that's the uh
1:34
the width of the spectral line say these
1:37
are very narrow band filters most of
1:39
them are you very little light through
1:41
so it's still very tight narrow band to
1:43
match the um fluoresence of the dies
1:45
they're using in the biochemical process
1:48
and obviously to reject the light that's
1:49
being fired at it from that Illuminator
1:51
box and then there's a there's a second
1:53
one of these lenses then the actual sort
1:55
of samples below that so uh very serious
1:58
amount of very uh chunky heavy Optics
2:01
okay let's take a look at this light
2:02
source made by company Lumen Dynamics
2:04
who are now part of
2:06
excelitas self-contained unit power
2:08
connector USB and this which one of the
2:11
Cable Bundle said was a TTL interface
2:14
USB wasn't used in uh the fluid
2:17
application output here and I think this
2:19
is an input for um light feedback I
2:21
don't if it's regulated or just a measur
2:23
measurement facility and the uh fiber
2:27
assembly
2:29
Square Inlet there and then there's two
2:32
outputs which have uh lens assemblies
2:35
and this small one which goes back into
2:37
that small Port just Loops out of here
2:40
straight back in So on this side we've
2:42
got the electronics which look pretty
2:44
straightforward we've got a bit of power
2:45
supply stuff over here and we've got
2:48
separate drivers for each wavelength now
2:50
interesting this is clearly been very
2:52
specifically made for this application
2:54
you I was half expecting like say some
2:56
generic drivers that could be used for a
2:58
number of different things but actually
3:00
literally specified the exact wavelength
3:02
on the PCB there is provision here for
3:04
385 NM which isn't populated but this is
3:07
clearly been designed very specifically
3:09
so these four drivers look the same but
3:10
then there's two higher power ones for
3:12
575 and
3:14
520 a slightly bigger heat sink on this
3:16
575 section there a p 24 which is
3:20
providing USB interface USB isolator the
3:23
USB interface just presents as a comport
3:26
I did have a quick look but I didn't
3:27
actually get anything sensible um I did
3:29
dump the Pi code out and there's a few
3:31
you a few sort of commands that you
3:32
could see in text but I didn't actually
3:34
manage to get it working properly I
3:36
found some software for related version
3:38
but it didn't seem to want to talk to it
3:39
but um I say that wasn't used for the
3:41
original application it might be quite
3:42
interesting to get try and get the Run
3:44
hours count out of it and the TTL
3:46
interface looks fairly straightforward
3:48
we've got positions for six opto
3:50
isolators but only five five are
3:52
installed so that corresponds with the
3:54
unused thing so I think this hopefully
3:56
should be as simple as just providing a
3:57
ttrl signal for each color to uh enable
4:00
it a big heat sink here which is there I
4:03
think there's like a big S of metal
4:04
plate through the middle of this that
4:05
all the leads are mounted on the other
4:07
side so this is heat sinking it with a
4:09
air flow from a uh just a fan in here
4:13
obviously don't have the air flow
4:14
anywhere near the Optics so conduction
4:17
cool through to this plate that's then
4:18
uh air cooled got some pots which are
4:21
presumably power
4:22
adjustments okay let's take a look at
4:24
the other side which is uh much more
4:27
interesting see we've got some uh very
4:31
uh neatly Twisted cable assemblies there
4:35
a bunch of leads so we've got one here
4:37
475 up here 430 NM 630 575 and 520
4:44
filters and dcro mirrors a quick way to
4:48
see what's white is if we just shine
4:49
some white light through
4:51
here not sure how it is is to see on the
4:54
camera but shining white light we do
4:55
actually get a bit of red a bit of blue
4:57
some yellow here so the obstacle path
5:00
575 it goes sort of here bounces off
5:03
this mirror and goes out the 520 goes
5:07
sort of down here across here and up
5:09
there 630 goes basically straight
5:13
through
5:15
430 goes across there down there along
5:17
there and the 475 goes down here and
5:20
left this is the light sensing thing
5:22
think here there's just a um I think
5:24
there a photo diode or other sensor
5:26
haven't actually taken that off and
5:28
everything's fixed down to this chunk of
5:31
aluminium which acts as the heat
5:32
spreader that then conducts the heat to
5:33
the back side for the heat
5:35
sink and the actual lead packages all
5:38
look fairly similar except for this one
5:41
on the 575 which looks quite a bit more
5:44
substantial big spay
5:46
Terminals and the interface for this
5:48
turned out to be extremely simple it's
5:50
literally a 5V TTL level to enable each
5:54
color doesn't seem to be any tensity
5:56
control but there are some additional
5:58
pins on that connector that weren't used
5:59
in the through time thing so maybe
6:01
there's some extra lines that control
6:02
that I couldn't find any data on this uh
6:05
unit and the um their current product
6:07
range is quite significantly different
6:09
so we've got the uh blue these
6:13
might may well be saturating the camera
6:16
so they might look a bit weird so that's
6:17
the 430
6:18
blue the 575
6:24
yellow uh
6:26
475 light blue
6:29
the uh 520
6:31
green and the uh 630 red now one
6:36
interesting thing I noticed for the
6:39
575 it's actually it's actually using a
6:42
white lead and then filtering it rather
6:44
than using all the other ones are using
6:46
leads which are the fundamental colors
6:47
but uh this is actually doing white and
6:50
it's a combination of this filter and
6:52
the dichroic mirrors that are turning to
6:55
Yellow if we take the filter out and a
6:57
lot of the a lot of the um blue content
7:00
is going this way the red is going
7:02
straight through these two mirrors so
7:05
this is clearly not reflecting much of
7:08
that so we end up with the yellow coming
7:10
out of uh out of there which is a fairly
7:14
light yellow color which you don't
7:16
really see from high intensity leads so
7:19
that's clearly why they've used the
7:20
white to uh do this power consumption of
7:23
the white is pretty high so going up to
7:25
about 2 and 1 half amps on that color
7:27
whereas most of the other colors are
7:28
only drawing half an amp or so at 24
7:30
volts the uh the green is up to about
7:32
1.2 but say this thing is uh much
7:35
brighter and if you actually run all the
7:38
colors at the same time you get a fairly
7:41
reasonable um looking white coming out
7:43
of it and one thing you might just be
7:45
out to notice is there is some sort
7:46
color banding around here that's not
7:49
getting uh everything s completely
7:51
concentric and I think that's where this
7:53
fiber optic thing comes
7:58
in I'll
8:00
get a couple of Fairly accurately shaped
8:04
very sort of uniform color and looking
8:06
at What's um inside here we've basically
8:09
just got this Square Rod so this is
8:12
clearly yeah the lights just bouncing
8:13
off all the all the various sides to um
8:16
get a nice uniform illumination uh this
8:19
back bit looks like it's all potted so
8:21
nothing I really do to get in there I
8:24
think this is fiber so I have come
8:26
across um cables like this which are
8:27
liquid fill but just looking through the
8:30
end of this it's probably a bit hard to
8:31
see it does look like there fiber ends
8:34
going going on there and so there's this
8:36
feedback thing which is just obviously
8:39
compensating for the any light losses
8:41
through here to get an accurate
8:43
representation of uh the light that's
8:45
been launched out of these two
8:47
fibers and you see uh
8:49
these have got this sort of trapezium
8:54
shape light guides again it's like a
8:56
sort of acrylic or glass light guide
9:00
guess projected just to make the right
9:03
rectangular
9:04
shape and look at this Center assembly
9:07
um the light output doesn't uh change
9:10
whether you feed this in or not so it's
9:11
clear not doing any internal Clos Loop
9:14
control obviously there may well be some
9:16
facility for it to do that but it's not
9:17
being used in this
9:19
application and so this output just
9:21
produces a voltage on the uh outle
9:24
connector proportional to the amount of
9:26
light that's present so there's a little
9:28
diffuser in the back there
9:30
and then there's just some kind of uh
9:33
Optical sensor looks like a
9:35
chip looking at the lead it's a very
9:37
small package on the PCB with this lens
9:40
assembly over the top and these look
9:43
like they're actually on a copper
9:44
Metalized PCB for maximum thermal
9:47
performance and yeah it's a very small
9:49
package looks like it's a ceramic
9:51
package and there's a thermister there
9:53
for temperature monitoring this is the
9:56
475 blue one this is the 520 need to
9:59
Green which is uh rather different OB
10:02
it's a much bigger D with lots of bond
10:04
wise but also this looks like it's using
10:05
a phosphor if I shine a blue light at it
10:08
lights up green so this is actually a
10:10
phosphor conversion green lead which
10:12
I've I've come across before they want
10:15
that specific wavelength so they may be
10:17
easier to tune a phosphor than tune the
10:20
um semiconductor material to get the uh
10:23
right right wavelength from the lead
10:24
directly uh red 630 similar size to the
10:28
blue one or does seem to have a uh a
10:31
lens on top of it there is a sort of red
10:33
coloring to
10:35
the die but that doesn't appear to be
10:38
fluorescent as far as I can
10:39
tell and the white one again a little
10:41
bit different sort of much higher
10:43
current
10:46
connectors a makeer name on that
10:48
connector flot light not sure if that's
10:52
the connector or the lead
10:54
itself and obviously with the phosphor
10:56
and I'd imagine that phosphor may well
10:58
be tuned to get the maximum to the uh 5
11:01
cenm and actually this white one looks
11:04
like a St fairly standard product I just
11:06
found it in Mouse made by luminous
11:09
devices in fact actually I think all
11:11
these are based on various luminous
11:13
devices modules and they're you take
11:17
looks like they taking the nearest
11:18
wavelength and then just using these
11:19
filters to clean it up to get a precise
11:22
uh spectral line out of it so quite a
11:25
nice neat and um extreme
11:30
bright light source uh sure I've got any
11:33
particular use for it so I think this
11:35
might end up on
11:36
eBay but uh very pretty to look out and
11:40
without the uh risk of burning your eyes
11:43
out like you do with lasers so I thought
11:45
it would be interesting to try and
11:46
figure out the runtime of this things
11:48
like this we usually keep some sort
11:49
record of runtime cuz leads degrade over
11:51
time I couldn't get any software to work
11:52
through the USB face but then had a
11:54
thought probably going to be writing the
11:55
runtime periodically to the e s prom so
11:58
I just just scope up that and noticed it
12:00
was doing right every 5 minutes so I
12:02
just ran it for a while periodically
12:04
reading the E squ I just held the pick
12:05
in in reset and um put clip over to read
12:07
the square prom and found it was writing
12:10
one location per color every 5 minutes
12:12
so if one color was on it would write
12:14
that location every 5 minutes and just
12:16
increment it by one so after doing a few
12:18
tests with different colors of different
12:19
time periods it looked extremely
12:21
straightforward it's like a four bite
12:22
count for each color looking at the
12:24
original data that was in it all the
12:26
colors apart from Green were reading
12:28
zero and the green was reading four
12:30
indicating a total 20 minutes run time
12:32
ever if it was turned on run for a short
12:34
time then turned off that might not have
12:36
been counted but even so indicates this
12:37
thing wasn't used a great deal the whole
12:40
s process of doing a run can be several
12:42
hours but it'll only be doing probably
12:43
the Imaging at the end of that so you
12:46
wouldn't expect to be running for a long
12:47
time but say a single color for 20
12:50
minutes over its whole lifetime does
12:52
seem a little bit on the low side okay
12:55
let's look at the camera un fortunately
12:57
I managed to not record any sound when I
12:58
did this it's also a couple of months
13:00
ago so there's going to be a few details
13:02
that I've forgotten so I'm just going to
13:04
dub this over the original footage so um
13:07
take the lid off see this massive great
13:10
heat sink so this is a pel cool camera
13:12
we've got this blower fan producing a
13:14
fair amount of air flow through
13:16
it the connector here there's the ccds
13:19
mounted on the board on the
13:24
right this unplugs so we've got a bit of
13:27
power supply stuff on here
13:29
USB interface I think that's the Cyprus
13:32
microcontroller High speeded USB
13:34
interface there's a zyink spon fpga some
13:40
RAM and there's a couple of ATD
13:42
converters can't quite read what those
13:45
those are but anal
13:47
devices um little bit of bodgery around
13:51
here extra decoupling obviously they
13:53
have having some noise issues this is
13:55
around the ram chip quite a lot of extra
13:57
capacitors been added there
13:59
uh there's a couple of amplifiers prior
14:01
to the HD converter buffers or Andor
14:05
amplifiers taking the CCD
14:08
signal um bit more power spy stuff here
14:11
this is probably all to do with
14:12
generating the various CCD bias voltages
14:14
they uh need quite a lot of exotic
14:18
voltages next board down is just a
14:20
shield and an interconnect
14:24
boardly shielding the power supply stuff
14:26
from some the more sensitive an log
14:28
stuff
14:31
and this is the bottom board which is
14:32
just all power supply
14:34
stuff as you can see tons of capacitors
14:37
or Transformer in
14:42
there and this is the CCD which is a uh
14:47
very impressive thing this is a kf50 100
14:50
originally by true sense then codec
14:53
there ON
14:54
Semiconductor it's 50 megapixels uh the
14:58
only price I could find was this one
15:00
5,000 bucks and the architecture you can
15:03
see there actually two separate halves
15:04
which explains the Dual AZ converters
15:06
and two amplifiers it's literally split
15:08
down the middle and duplicated so it's
15:10
outputting two streams in parallel just
15:13
to keep the bandwidth sensible and it's
15:15
got this amazing um diffraction effects
15:18
it's got micro lenses over the pixel so
15:20
there's there's a bit more Optics going
15:22
on than on a normal
15:25
sensor few more bodges on the CCD board
15:28
including this wire which isn't really
15:29
tacked down very well which is a bit uh
15:32
bit of a mess quite a few bits around
15:34
this board where they've uh tacked
15:36
various bits on which is not super
15:38
impressive looks like CCD drivers on the
15:40
left with those 3 ohm um damping
15:43
resistors on the
15:47
output get a few more little bodges
15:50
around here some of
15:52
the and there's this separator the
15:54
silica gel to keep the moisture down but
15:56
there's this separator that actually
15:58
appears to be cut from piece of
15:59
antistatic
16:04
bag and this sort of thermal block on
16:06
top of this stack of three pel Cola
16:12
modules so as with any Stacks they get
16:16
um larger as they go back towards the
16:18
heat sink because each P's got to not
16:20
only take the heat from the previous but
16:21
also the waste heat which is quite
16:27
significant you see a little temperature
16:29
sensor here that copper block which
16:32
makes contact with the back of the
16:37
CCD and this's the back of the
16:40
pelas this then contacts the heat sink
16:44
on the uh rear there a few thermal pads
16:46
as well for some of the other power
16:47
components on this
16:51
PCB okay I've connected this uh camera
16:54
up I found some drivers on the disc that
16:56
seem to work under Windows 7 couldn't
16:58
get to install under Windows 11 though
17:01
um in the absence of any sort of lens or
17:03
being bothered to the proper amount I've
17:04
just put some f over it and put a little
17:06
pin in there to make a pinhole lens and
17:08
software gives a few options I'm not
17:11
entirely sure what all these are there's
17:12
obviously a clock frequency 22 MHz low
17:15
gain and with PFG no idea what that is
17:19
something something game programmable
17:20
Something game perhaps ver exposure
17:23
types I think focus is just like a
17:25
continuous grab until you tell it to
17:27
stop not entirely sure all these options
17:30
are obviously exposure time uh triggers
17:33
there ex external hardware trigger inut
17:35
you just trigger using a um thing on
17:37
screen so the resolution is 8176 by
17:40
6132 and you can actually bin those
17:42
where you combine multiple pixels to get
17:46
increased gain at the expense of lower
17:48
resolution down this is a 10sec exposure
17:51
obviously of the pin hole it's very uh
17:53
intensitive so we just stand still now
17:56
downloading it there's the uh exposure
17:59
so when it's
18:01
um there's a little status thing down
18:03
here so that tells you the um exposure
18:07
[Applause]
18:09
time it's this is just it
18:15
downloading um it is quite I'm seeing
18:18
quite a lot like smearing I think that I
18:20
don't know whether that's just due to
18:21
pixels overloading or something else I
18:24
mean yeah it's not it's not um out of
18:26
the question that there's something not
18:27
totally right about this camera
18:28
certainly was bodge wise on there um I
18:31
don't I'd imagine a camera like this
18:32
it's got a fairly narrow range of
18:34
intensities that it's happy with I'm not
18:36
going to spend a great deal of time on
18:38
this if you're interested in this camera
18:40
maybe for astronomy or something and
18:42
happy to sort of take the risk of it may
18:44
not be uh perfect I'll um I think I'll
18:47
stick this on eBay along with the
18:48
Illuminator I'll put a link down in the
18:50
description to the listing take your
18:52
chances to grab a bargain so for example
18:54
here we see this vertical streaking so
18:56
I'm not sure how normal that is this is
18:58
on fairly bright scene looking out the
19:02
window if I cut the exposure time down
19:04
on that it's now 1 second
19:07
exposure again most of the image
19:09
disappears again this is looks like it's
19:11
possibly over still overloading here go
19:14
that go down to say say quarter a
19:16
second so again I think there might be
19:19
some Auto gain control going on here um
19:21
this is with the PFG option let's try
19:23
turning that off and see what
19:25
happens so I'm not sure this is actually
19:27
more streaking or which just it's
19:29
cranked up the gain all the dis display
19:31
gray scale to show what um you know the
19:33
range of things that it's captured
19:36
there's one of one of 12 things in the
19:38
software there's um you can see of you
19:40
can't seem to read out the temperature
19:42
of the pelta cooler but you can set the
19:44
temperature and if you said it's a
19:46
different temperature you see the power
19:48
consumption jump up running the cooler
19:50
to get the temperature you requested but
19:52
I can't see anything anywhere that tells
19:54
you whether the cool is at the at the
19:56
temperature other than the power
19:57
consumption going down and there's no
19:59
temperature read out
20:03
here and just some yeah this is just
20:05
sort of very basic software I'm sure
20:07
there's like an API for more
20:09
sophisticated
20:10
applications but so if you know anything
20:12
more about these cameras please um stick
20:14
in the
20:15
comments um incidentally when I was
20:18
editing I didn't notice there was a bent
20:19
pin on the um CCD but I did fix that
20:22
before doing these tests and also
20:24
reactivated the um silica gel desicant
20:26
cuz I noticed it was uh I was getting
20:28
bit of condensation on the window but um
20:31
yeah so a couple of uh interesting but
20:34
maybe not particularly uh useful pieces
20:37
of Kit except for someone that's got a
20:38
very specific use so um I'll stick a
20:42
I'll stick these on eBay put a link in
20:44
the description and say hopefully
20:45
someone could actually make some uh good
20:47
use of these things
Example Output:
**Abstract:**
This video presents Part 2 of a teardown focusing on the optical components of a Fluidigm Polaris biotechnology instrument, specifically the multi-wavelength illuminator and the high-resolution CCD camera.
The Lumen Dynamics illuminator unit is examined in detail, revealing its construction using multiple high-power LEDs (430nm, 475nm, 520nm, 575nm, 630nm) combined via dichroic mirrors and filters. A square fiber optic rod is used to homogenize the light. A notable finding is the use of a phosphor-converted white LED filtered to achieve the 575nm output. The unit features simple TTL activation for each color, conduction cooling, and internal homogenization optics. Analysis of its EEPROM suggests extremely low operational runtime.
The camera module teardown showcases a 50 Megapixel ON Semiconductor KAF-50100 CCD sensor with micro-lenses, cooled by a multi-stage Peltier stack. The control electronics include an FPGA and a USB interface. Significant post-manufacturing modifications ("bodges") are observed on the camera's circuit boards. Basic functional testing using vendor software and a pinhole lens confirms image capture but reveals prominent vertical streaking artifacts, the cause of which remains uncertain (potential overload, readout artifact, or fault).
**Exploring the Fluidigm Polaris: A Detailed Look at its High-End Optics and Camera System**
* **0:00 High-End Optics:** The system utilizes heavy, high-quality lenses and mirrors for precise imaging, weighing around 4 kilos each.
* **0:49 Narrow Band Filters:** A filter wheel with five narrow band filters (488, 525, 570, 630, and 700 nm) ensures accurate fluorescence detection and rejection of excitation light.
* **2:01 Customizable Illumination:** The Lumen Dynamics light source offers five individually controllable LED wavelengths (430, 475, 520, 575, 630 nm) with varying power outputs. The 575nm yellow LED is uniquely achieved using a white LED with filtering.
* **3:45 TTL Control:** The light source is controlled via a simple TTL interface, enabling easy on/off switching for each LED color.
* **12:55 Sophisticated Camera:** The system includes a 50-megapixel ON Semiconductor KAF-50100 CCD camera with a Peltier cooling system for reduced noise.
* **14:54 High-Speed Data Transfer:** The camera features dual analog-to-digital converters to manage the high data throughput of the 50-megapixel sensor, which is effectively two 25-megapixel sensors operating in parallel.
* **18:11 Possible Issues:** The video creator noted some potential issues with the camera, including image smearing.
* **18:11 Limited Dynamic Range:** The camera's sensor has a limited dynamic range, making it potentially challenging to capture scenes with a wide range of brightness levels.
* **11:45 Low Runtime:** Internal data suggests the system has seen minimal usage, with only 20 minutes of recorded runtime for the green LED.
* **20:38 Availability on eBay:** Both the illuminator and camera are expected to be listed for sale on eBay.
Here is the real transcript. What would be a good group of people to review this topic? Please provide a summary like they would:
The AI Frontier: from Gemini 3 Deep Think distilling to Flash — Jeff Dean
Latent Space
5,775 views Feb 12, 2026 Latent Space - The AI Engineer Podcast (Video Podcast)
From rewriting Google’s search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.
Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google’s AI teams, and why the next leap won’t come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.
We discuss:
• Jeff’s early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
• The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
• Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
• Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
• Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
• Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
• TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
• Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
• Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
• Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
• Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
• Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
• Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn’t blind; the pieces had to multiply together
Substack Article w/Show Notes: https://www.latent.space/p/jeffdean
—
Jeff Dean
• LinkedIn: / jeff-dean-8b212555
• X: https://x.com/jeffdean
Google
• https://google.com
• https://deepmind.google
00:00:00 Intro
00:01:31 Frontier vs Flash & Distillation Strategy
00:05:09 Distillation, RL & Flash Economic Advantage
00:07:35 Flash in Products + Importance of Latency
00:11:11 Benchmarks, Long Context & Real Use Cases
00:15:01 Attending to Trillions of Tokens & Multimodality
00:20:11 LLM Search & Google Search Evolution
00:24:09 Systems Design Principles + Latency Numbers
00:32:09 Energy, Batching & TPU Co-Design
00:42:21 Research Frontiers: Reliability & RL Challenges
00:46:27 Unified Models vs Symbolic Systems (IMO)
00:50:38 Knowledge vs Reasoning + Vertical/Modular Models
00:55:58 Multilingual + Low-Resource Language Insights
00:57:58 Vision-Language Representations Example
01:07:15 Gemini Origin Story + Organizational Memo
01:09:27 Coding with AI & Agent Interaction Style
01:14:26 Prompting Skills & Spec Design
01:19:54 Latency Predictions & Tokens/sec Vision
01:21:29 Future Predictions: Personal Models & Hardware
01:23:11 Closing
This Latent Space podcast episode explores the AI Pareto frontier with Jeff Dean. The discussion delves into model efficiency, hardware advancements, and the challenges of balancing cutting-edge capabilities with real-world deployment needs. Listen to discover how distillation and other techniques enable broader AI accessibility.
Transcript
Intro
0:04
Hey everyone, welcome to the L in space podcast. This is Allesio, founder of Colonel Labs, and I'm joined by Swix,
0:09
editor of L in Space. Hello. Hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.
0:14
Thanks for having me. It's a bit surreal to have you in the studio. I've I've watched so many of your talks uh and obviously uh you your
0:22
career has been super legendary. So, uh I mean, congrats. I I think the the first thing must be said congrats on
0:28
owning the Purto Frontier. Thank you. Thank you. Parto Frontiers are good and it's good to be out there.
0:34
Yeah. I mean I I think it's a combination of both uh your you have to own the Parto Frontier you have to have
0:41
like frontier capability but also efficiency and then offer that range of
0:46
models that people like to use. uh and you know some part of this was started because of your hardware work some part
0:53
of that is your model work and uh you know I'm sure there's lots of secret sauce that you guys uh have worked on uh
0:59
accumulatively but like it's it's really impressive to see it all come together in like this steadily advancing frontier.
1:05
Yeah. Yeah. I mean I think as you say it's not just one thing it's like a whole bunch of things up and down the
1:10
stack and uh you know all of those really combined to help make you an OS able to
1:16
make highly capable large models as well as you know software techniques to get those large model capabilities into much
1:23
smaller lighter weight models that are you know much more cost-effective and lower latency but still you know quite
1:29
capable for their size. So yeah, how how much pressure do you have on like having the lower bound of the prior
Frontier vs Flash & Distillation Strategy
1:36
frontier too? I think like the new labs are always trying to push the top performance frontier because they need
1:42
to raise more money and all of that. And you guys have billions of users and I think initially when you work on the CPU
1:49
you were thinking about you know if everybody that used Google we used the voice model for like 3 minutes a day they were like you need to double your
1:55
CPU number like what's that discussion today at Google like how do you
2:00
prioritize frontier versus like we actually need to deploy it if we build it. Yeah, I mean I think we always want
2:05
to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities
2:11
now exist that didn't exist at the sort of slightly less capable last year's
2:16
version or last six months ago version. Um at the same time, you know, we know there those are going to be really
2:22
useful for a bunch of use cases, but they're going to be uh a bit slower and a bit more expensive than people might
2:30
like for a bunch of other broader use cases. So I think what we want to do is always have um kind of a highly capable
2:38
uh sort of uh affordable model that enables a whole bunch of you know lower
2:44
latency use cases. People can use them for agentic coding much more readily. Um and then have the the high-end you know
2:52
frontier model that is really useful for um you know deep reasoning you know solving really complicated math problems
2:59
those kinds of things. And and it's not that one or the other is useful. They're both useful. So I think we like to do
3:06
both. And also, you know, through distillation, which is a key technique for making the smaller models more
3:12
capable, you know, you have to have the frontier model in order to then distill it into your your smaller model. So it's
3:18
not like an either or choice. You sort of need that in order to actually get a highly capable more modest size model.
3:24
Yeah. And I mean you and Jeffrey In came out with this solution in 2014. Don't forget L'Oreal Vine as well.
3:31
a long time ago. Like I'm curious how you think about the cycle of these ideas even like you know
3:37
sparse models and uh you know how do you re-evaluate them? How do you think about in the next generational model what is
3:43
worth revisiting like a yeah they're just kind of like a you know you worked on so many ideas that end up being
3:48
influential but like in the moment they might not feel that way necessarily. Yeah, I mean I I think distillation was
3:54
originally motivated because we were seeing that we had a very large image data set at the time, you know, 300
4:00
million images that we could train on with, you know, I forget like 20,000 categories or something, so much bigger
4:06
than ImageNet. And we were seeing that if you create specialists for different
4:11
subsets of those image categories, you know, this one's going to be really good at sort of mammals and this one's going
4:16
to be really good at sort of indoor room scenes or whatever. and you can cluster those categories and train on an
4:23
enriched stream of data after you do pre-training on on a much broader set of
4:29
images. You get much better performance if you then treat that whole set of maybe 50 models you've trained as a
4:35
large ensemble. Um but that's not a very practical thing to serve, right? So distillation really came about
4:42
from the idea of okay what if we want to actually serve that and train all these independent sort of expert models um and
4:50
then squish it into something that actually fits in a form factor that you can actually serve. And that's you know
4:56
not that different from what we're doing today. You know often today we're instead of having an ensemble of 50 models we're having a much larger scale
5:04
model that we then distill into a much smaller scale model.
Distillation, RL & Flash Economic Advantage
5:09
Yeah, a part of me also wonders if distillation also has a story with the
5:15
RL um revolution. So what let me let me maybe try to articulate what I mean by
5:21
that. uh which is you can uh RL basically spikes models in a certain uh part of the distribution and then you
5:29
have to sort of well you can spike models but usually sometimes it might be lossy in other areas and it's kind of
5:35
like an uneven technique but you can probably distill it back uh and you can uh I think that the sort of general um
5:43
dream is to be able to advance capabilities without regressing on anything else
5:49
and I think like that that whole capability merging without loss. Uh uh I
5:54
feel like it's like you know some part of that should be a distillation process but I can't quite articulate it. I
6:00
haven't seen much papers about it. Yeah. I mean I I tend to think of one of the key advantages of distillation is
6:06
that you can have a much smaller model and you can have a very large uh you
6:11
know training data set and you can get utility out of making many passes over that data set because you're now getting
6:18
the logits from the much larger model in order to sort of sort of coax the right behavior out of
6:23
the smaller model uh that you don't wouldn't otherwise get with just the hard labels and and so um you know I
6:30
think that's what we've observed is you can get, you know, clo very close to
6:35
your largest model performance with distillation approaches. And that that seems to be, you know, a nice sweet spot
6:41
for a lot of people because it enables us to kind of for multiple Gemini generations now, we've been able to make
6:48
the sort of flash version of the next generation as good or even substantially better
6:55
than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good uh
7:00
trend to follow. Um dare I ask uh so it was it was the original map was Flash Pro and Ultra.
7:07
Uh is ultra are you just sitting on ultra and distilling from that? Is that like the mother load? Uh I mean we have a lot of different
7:14
kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are you know
7:19
our pros scale model and we can distill from that as well into our flash scale model. So I think you know uh it's u
7:26
it's an important set of capabilities to have and also inference time scaling can
7:31
also be a useful thing to improve the capabilities of a model and yeah cool yeah and obviously I think the
Flash in Products + Importance of Latency
7:38
economy of flash is what led to the total dominance I think the the latest number is like 50 trillion uh tokens I I
7:45
don't know I mean obviously it's changing every day but uh you know by market share hopefully hopefully up
7:51
no I mean there's no I mean Just the economics wise like uh because flash is so economical like you can use it for
7:57
everything like it's in Gmail now it's in YouTube like it's it's in everything we're using it more in our search
8:03
products of various AI mode overviews. Oh my god flash parts AI mode. Oh my god. Yeah that's yeah I didn't even
8:09
think about that. Um I mean I think one of the things that is uh quite nice about the flash model
8:15
is not only is it more affordable it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because
8:21
we're going to want models to do much more complicated things that are going to involve, you know, generating many
8:27
more tokens from when you ask the model to do something until it actually finishes what you ask it to do because
8:33
you're going to ask now not just write me a for loop, but like write me a a whole software package to do X or Y or
8:40
Z. And so having low latency systems that can do that uh seems really
8:46
important. and flash is one direction, one one way of doing that. Yeah. You know, obviously our hardware
8:52
platforms enable a bunch of interesting aspects of our, you know, serving stack
8:57
as well like TPUs. Uh the interconnect between chips on the TPUs, uh is
9:03
actually quite quite high performance and quite amendable to for example long
9:08
context kind of attention operations. You know, having sparse models with lots of experts. These kinds of things really
9:15
really matter a lot in terms of how do you make them servable at scale.
9:20
Yeah. Does it feel like there's some breaking point for like the protoflash
9:25
distillation kind of like one generation delayed? I I almost think about almost like the capability asmtote in certain
9:32
tasks like the pro model today is as saturated some sort of task. Mhm.
9:37
So next generation that same task will be saturated at the flash price point and I think for most of the things that
9:44
people use models for at some point the flash model in two generation will be able to do basically everything and how
9:52
do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the
9:57
flash model? I'm curious how you think about that. I mean I think that's true if your distribution of what people are asking
10:03
people the models to do is stationary, right? But I think what often happens is as the models become more capable,
10:10
people ask them to do more, right? So I mean I think this happens in my own usage like I used to try our models a
10:18
year ago for some sort of coding task and it was okay at some simpler things
10:23
but wouldn't work very well for more complicated things. And since then we've improved dramatically on the
10:29
more complicated coding tasks and now I'll ask it to do much more complicated things. And I think that's true not just
10:34
of coding but of you know now you know can you analyze all the you know
10:40
renewable energy uh deployments in the world and give me a report on solar panel deployment or whatever. That's a
10:46
very complicated you know more complicated task than people would have asked a year ago. And so you are going to want more
10:53
capable models to push the frontier in some sense of what people ask the models
10:59
to do. And that also then gives us insight into okay where does the where
11:04
do things break down? How can we improve the model in these these particular areas uh in order to sort of um make the
11:10
next generation even better? Yeah. Are there any benchmarks or like test sets that you use internally?
Benchmarks, Long Context & Real Use Cases
11:15
Because it's almost like the same benchmarks get reported every time and it's like all right it's like 99 instead of 97. Like how do you have to keep
11:22
pushing the team internally too to like this is what we're building towards? Yeah. I mean, I think benchmarks,
11:28
particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan
11:35
of utility where they're introduced and maybe they're quite hard for current models. You know, I I like to think of
11:42
the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30% maybe, but not higher. And
11:50
then you can sort of work on improving that capability for uh whatever it is the benchmark is trying to assess and
11:57
get it up to like 80 90% whatever. I I think once it hits kind of 95% or
12:03
something you get very diminishing returns from really focusing on that benchmark because it's sort of it's
12:08
either the case that you've now achieved that capability or there's also the issue of leakage in public data or very
12:15
related kind of data being being in your training data. Um, so we have a bunch of held out internal benchmarks that we
12:22
really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the
12:28
model to have um that it doesn't have now and then we can work on, you know,
12:34
assessing, you know, how do we make the model better at these kinds of things? Is it we need different kind of data to
12:39
train on that's more specialized for this particular kind of task? Do we need um you know a bunch of uh you know
12:46
architectural improvements or some sort of uh model capability improvements? You know what would help make that better?
12:53
Is there such an example where, uh, a benchmark inspired an architectural improvement? Like, uh, I'm
13:00
just kind of jumping on that because you just uh I mean I think some of the long context capabilities of the of the
13:07
Gemini models that came I guess first in 1.5 really were about looking at okay we
13:13
want to have um you know immediately everyone jumped to like completely green charts of like everyone
13:19
had I was like how did everyone crack this at the same time like right yeah I mean I think um and once
13:26
you're set... I mean, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context
13:34
lengths up to 128k or something. I think most people don't actually have you know much larger than 128k these days or 256 or
13:41
something. Um, you know, we're trying to push the frontier to 1 million or 2 million token context lengths. I think Google's still the leader at 2
13:46
million. Yep. which is good because I think there are a lot of use cases where you know putting a thousand pages of text or
13:53
putting you know multiple you know hourlong videos in the context and then actually being able to make use of that
13:59
is useful. But the single needle-in-a-haystack benchmark is sort of
14:06
saturated. Um so you really want more complicated uh sort of multi- needle or
14:12
you know more realistic take all this content and produce this kind of answer
14:18
from uh uh a long context that sort of better assesses what it is people really
14:24
want to do with long context which is not just you know can you tell me the product number for this particular
14:30
thing. Yeah it's retrieval it's it's retrieval within machine learning. Uh yeah, it's it's interesting because like I think
14:37
that the more meta lesson level I'm trying to operate at here is uh you have a benchmark you're like okay I see the
14:42
architectural thing I need to do in order to go fix that but like should you do it because sometimes you know that's
14:48
an inductive bias, basically, that you're adding. Jason, who we used to work with at Google, would say exactly that kind of thing, like,
14:54
yeah you're going to win short term longer term I don't know if that's going to scale you might have to undo that
Attending to Trillions of Tokens & Multimodality
15:01
I mean I I I like to sort of not focus on exactly what solution one should
15:07
drive but what capability would you want and I think we're very convinced that
15:12
you know long context is useful but it's way too short today right like I think what you would really
15:18
want is can I attend to the internet while I answer my question right but that's not going to be solved by
15:26
purely scaling the existing solutions which are quadratic so a million tokens kind of pushes
15:32
uh, what you can do. You're not going to do that for a billion tokens, let alone, you know, a trillion tokens. Um, but I think if you could
15:40
give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses
15:45
for that. You could, um, attend to the internet. You could attend to the
15:51
pixels of YouTube and the sort of deeper representations that we can form for a
15:56
single video, but across many videos, you know, uh on a personal Gemini level,
16:01
you could attend to all of your personal state with your permission. So like your emails, your photos, your
16:08
yeah, your docs, your plane tickets you have. Um I I think that would be really really
16:13
useful. And the question is, how do you get algorithmic improvements and system
16:19
level improvements that get you to something where you actually can attend to trillions of tokens in some
16:25
meaningful way? Yeah. But by the way, I think I I did some math and if like if you spoke all day every day for eight hours a day, um
16:32
you only generate a maximum of like 100k tokens, which like very comfortably fits,
16:38
right? But if you then say okay I want to be able to um understand everything
16:44
people are putting on video. Exactly. Exactly. Well also I think that the classic example is um you start going beyond language into like proteins
16:51
and whatever else is extremely information dense. Yeah. Yeah. I mean, I think one of the things
16:57
about Gemini's multimodal aspects is we've always wanted it to be multimodal from the
17:03
start. And so, you know, that sometimes to people means
17:09
text and images and video and audio, the sort of human-like
17:15
modalities. But I think it's also really useful to have Gemini know about non-human modalities, like lidar sensor
17:22
data from, say, Waymo vehicles, or, like, robots, or, you know, various kinds of
17:29
health modalities, X-rays and MRIs and imaging and genomics information. Um and
17:34
I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed
17:40
to the fact that this is an interesting modality and has certain meaning in the world. uh where even if you haven't
17:46
trained on all the lidar data or MRI data you could have, because maybe that's not,
17:53
you know doesn't make sense in terms of trade-offs of you know what you include in your main pre-training data mix at
17:58
least including a little bit of it is actually quite useful, because it sort of, uh, hints to the model that this is a
18:04
thing. Yeah. Yeah. Do do you believe I mean since we're on this topic and something I just get to ask you all the questions
18:09
I always wanted to ask, which is fantastic. Uh, like, are there some king modalities, like modalities that supersede all the other modalities? So
18:15
a simple example is vision, um, which can on a pixel level encode text, and DeepSeek
18:22
had this DeepSeek-OCR paper that did that. Uh, vision has also been shown to maybe incorporate audio, because you can do
18:29
audio spectrograms, and that's also like a vision-capable thing. So maybe vision is just the king
18:34
modality and like yeah I mean vision and motion are quite important things right
18:40
motion uh video as opposed to static images because I mean there's a reason
18:45
evolution has evolved eyes like 23 independent ways because it's such a useful capability for sensing the world
18:52
around you which is really what we want these models to be able to do is interpret the things we're seeing or the
18:58
things we're paying attention to and then help us in, uh, using that information to do things.
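(Since the audio-spectrogram point above is easy to make concrete: a minimal NumPy sketch, with assumed frame sizes, that turns a waveform into the kind of 2D "image" a vision model could consume.)

```python
import numpy as np

def log_spectrogram(waveform: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Turn a 1-D waveform into a 2-D time-frequency "image" (frames x frequency bins)."""
    frames = [
        waveform[start:start + frame_len] * np.hanning(frame_len)
        for start in range(0, len(waveform) - frame_len, hop)
    ]
    spectra = np.abs(np.fft.rfft(np.stack(frames), axis=-1))  # magnitude spectrum per frame
    return np.log1p(spectra)                                  # log compression, typical for spectrograms

# A one-second 16 kHz sine wave becomes a roughly 98 x 201 array a vision tower could ingest.
wave = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)
print(log_spectrogram(wave).shape)
```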
19:05
Yeah, I I think motion uh you know I still want to shout out I think Gemini uh still the only native video
19:11
understanding model that is out there. Uh so I use it for YouTube all the time. Yeah. Yeah. I mean, it's actually I
19:18
think people kind of are not necessarily aware of what the Gemini models can
19:24
actually do with video. Like, uh, I have an example I've used in one of my talks. It had like, uh, it was like a YouTube
19:31
highlight video of 18 memorable sports moments across the last 20 years or
19:37
something. So, it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer
19:42
uh, goals and things like that. And you can literally just give it the video and say, "Can you please make me a table of
19:48
what all these different events are, what the date was when they happened, and a short description of the
19:55
event." And so you get like now an 18 row table of that information extracted
20:01
from the video, which is, you know, not something most people think of as like a
20:06
turn video into SQL like table. Yeah. Has there been any discussion
LLM Search & Google Search Evolution
20:13
inside of Google of, like, you mentioned attending to the whole internet? Right. Google is almost built because a
20:19
human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep.
20:25
That ranking is like much different for an LLM because you you can expect a person to look at maybe the first five
20:31
six links in a Google search versus for an LLM should you expect to have 20 links that are highly relevant?
20:38
like how do you internally figure out you know how do we build the AI mode that is like maybe like much broader
20:44
search and span versus like the more human one. Yeah. I mean I think even pre- language
20:50
model based work you know our ranking systems would be built to start with a
20:56
giant number of web pages in our index. Many of them are not relevant. So you identify a subset of them that are
21:02
relevant with very lightweight kinds of methods. Now you're down to like 30,000
21:07
documents or something. And then you have gradually refine that to apply more
21:12
and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get
21:19
down to ultimately what you show which is you know the final 10 results or you know 10 results plus other kinds of
21:26
information. And I think an LLM based system is not going to be that
21:31
dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are
21:36
the 30,000ish documents that with the, you know, uh, maybe
21:43
30 million interesting tokens and then how do you go from that into what are
21:48
the 117 documents I really should be paying attention to in order to carry out the task that the user has asked me
21:55
to do. Um and I think you know you can imag you can imagine systems where you
22:01
have you know a lot of uh highly parallel processing to identify those initial
22:07
30,000 candidates maybe with very lightweight kinds of models. Um then you have some system that sort of helps you
22:13
narrow down from 30,000 to the 117 uh with maybe a little bit more
22:18
sophisticated um model uh or set of models. And then maybe the final model
22:24
is the thing that looks at 117 things. That might be your most capable model. So I think it has to it's going to be
22:30
some system like that that is really enables you to give the illusion of attending to trillions of tokens. Um
22:37
sort of the way Google Search gives you, you know, not the illusion, but you are searching the internet. Yeah.
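(A minimal sketch of the multi-stage funnel just described; the three helper functions are hypothetical stand-ins for a lightweight filter, a mid-sized reranker, and the most capable model, not real APIs.)

```python
# Hypothetical sketch: trillions of tokens -> ~30,000 candidates -> ~117 documents -> one answer.

def cheap_filter(query: str, corpus: list[str], keep: int = 30_000) -> list[str]:
    """Stage 1: very lightweight scoring (e.g. lexical overlap or a tiny embedding model)."""
    overlap = lambda doc: len(set(query.split()) & set(doc.split()))
    return sorted(corpus, key=overlap, reverse=True)[:keep]

def mid_rerank(query: str, docs: list[str], keep: int = 117) -> list[str]:
    """Stage 2: a somewhat more capable (but still cheap) reranker narrows further."""
    return docs[:keep]  # placeholder for a cross-encoder or small LLM scorer

def frontier_model(query: str, docs: list[str]) -> str:
    """Stage 3: the most capable model only ever attends to the final ~hundred documents."""
    return f"answer to {query!r} grounded in {len(docs)} documents"

def answer(query: str, corpus: list[str]) -> str:
    candidates = cheap_filter(query, corpus)   # huge corpus -> ~30k candidates
    shortlist = mid_rerank(query, candidates)  # ~30k -> ~117
    return frontier_model(query, shortlist)    # deep reasoning over the shortlist only
```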
22:43
But you're finding you know a very small subset of things that are that are relevant. Yeah. I I often tell a lot of people uh
22:50
that are not steeped in like Google search history that uh well you know like BERT was like used like basically
22:56
immediately inside of Google search uh and that improves results a lot right like I I don't I don't have any numbers
23:02
off the top of my head, but, like, I'm sure those are obviously the most important numbers to Google. Yeah, I mean, I
23:08
think going to an LLM-based representation of text and words and so
23:14
on enables you to get out of the explicit hard notion of of particular
23:20
words having to be on the page, but really getting at the notion of this topic of this page or this paragraph is
23:26
highly relevant to this query. Yeah. Yeah. I don't think people understand how much LLMs have taken over all these very high traffic systems. Very
23:33
high traffic. Yeah, like it's Google, uh, it's YouTube. Uh, YouTube has this, like, semantic ID thing
23:39
where there's like every token or every uh item in the vocab is a YouTube video or something that predicts the video
23:46
using a codebook, which is absurd to me at YouTube size. And then most recently Grok also, for xAI, which is like
23:55
I mean I'll call out even before LLMs were used extensively in search we put a
24:00
lot of emphasis on softening the notion of what the user actually entered into the query so that
24:07
Do you have, like, a history of, like, what's the... Yeah, I mean, I actually gave a talk at, uh, I guess, uh, the Web Search and Data Mining
Systems Design Principles + Latency Numbers
24:14
conference in 2009. Okay. Uh, we never actually published any papers about the origins
24:19
of Google Search, uh, sort of, but we went through four or five or six
24:24
generations of, uh, redesigning of the
24:29
search and retrieval system uh from about 1999 through 2004 or five and that
24:35
talk is really about that evolution and one of the things that really happened in 2001 was we were
24:43
sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger so we could
24:49
retrieve from a larger index which always helps your quality in general uh
24:54
because if you don't have the page in your index you're going to not do well. Um and then we also needed to scale our
25:01
capacity because we were our traffic was growing quite extensively. Um and so we
25:06
had you know a sharded system where you have more and more shards as the index grows. you have like 30 shards and then
25:13
if you want to double the index size you make 60 shards so that you can bound the latency by which you respond for any
25:20
particular user query. Um and then as traffic grows you add more and more replicas of each of those. And so we
25:26
eventually did the math and realized that in a data center where we had, say, 60 shards and, um, you know, 20 copies of
25:34
each shard, we now had 1,200 machines, uh, with disks. And we did the math and
25:40
we're like, hey, one copy of that index would actually fit in memory across those 1,200 machines. Mhm.
25:45
So in 2001 we, uh, put our entire index in memory.
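(The back-of-envelope behind that decision, written out; the shard and replica counts are the ones mentioned above, while the total index size is an assumed placeholder.)

```python
# Back-of-envelope for the disk-to-memory switch. 60 shards x 20 replicas = 1,200 machines
# comes from the conversation; the total index size is an assumed, illustrative number.
SHARDS = 60
REPLICAS_PER_SHARD = 20
MACHINES = SHARDS * REPLICAS_PER_SHARD            # 1,200 machines with disks
INDEX_SIZE_GB = 2_000                             # assumed total index size
PER_MACHINE_SHARE_GB = INDEX_SIZE_GB / MACHINES   # ~1.7 GB of index per machine

print(MACHINES, round(PER_MACHINE_SHARE_GB, 2))
# Once each machine only needs a couple of GB of the index, one full copy fits in aggregate
# RAM across the fleet, and disk seeks drop out of the query path, which is what made
# aggressive query expansion affordable.
```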
25:50
And what that enabled from a quality perspective was amazing because before you had to be really careful about, you
25:57
know, how many different terms you looked at for a query because every one of them would involve a disk seek on
26:03
every one of the 60 shards. And so you as you make your index bigger, that becomes even more inefficient.
26:11
But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the
26:16
user's original three or four word query because now you can add synonyms like restaurant and restaurants and cafe and
26:24
uh, bistro and all these things. And you can suddenly start, uh, sort of really
26:30
uh, getting at the meaning of the word as opposed to the exact form the
26:35
user typed in. And that was, you know, 2001, very much preLLM, but really it
26:41
was about softening the strict definition of what the user typed in order to get at the meaning.
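(A toy sketch of that "softening" step: expand the literal query with synonyms before scoring against an in-memory index. The synonym table and documents are made up for illustration.)

```python
# Toy query expansion against an in-memory index. All data here is invented.
SYNONYMS = {"restaurant": ["restaurants", "cafe", "bistro", "diner"]}

def expand(query: str) -> set[str]:
    terms = set(query.lower().split())
    for term in list(terms):
        terms.update(SYNONYMS.get(term, []))
    return terms

def score(doc: str, terms: set[str]) -> int:
    return len(set(doc.lower().split()) & terms)

index = ["best bistro near the bridge", "cheap tire repair", "top cafe downtown"]
terms = expand("good restaurant nearby")
print(sorted(index, key=lambda d: score(d, terms), reverse=True)[:2])
# With the index in memory, scoring against ~50 expanded terms is cheap; on a disk-based
# index, each extra term would have cost a seek on every shard.
```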
26:47
What are like principles that you use to like design the systems, especially when you have I mean in 2001 the internet is
26:53
like doubling tripling every year in size. It's not like a you know, and I think today you kind of see that with
26:59
LLMs too where like every year the jumps in size and like capabilities are just so big. Are there just any you know
27:05
principles that you use to like think about this? Yeah, I mean I think uh you know first
27:11
whenever you're designing a system you want to understand what are the sort of design parameters that are going to be
27:17
most important in deciding that you know so you know how many queries per second do you need to handle? How big is the
27:23
index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you
27:30
retrieve things? um what happens if traffic were to double or triple you
27:36
know, will that system work well? And I think a good design principle is you want to design a system so that the most
27:42
important characteristics could scale by like factors of five or 10 but probably
27:47
not beyond that because often what happens is if you design a system for X and something suddenly
27:54
becomes 100X that would enable a very different point in the design space that would not make sense at X but all of a
28:00
sudden at 100x makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense
28:08
once you have enough traffic because now you have enough replicas of the sort of
28:13
state on disk that those machines now actually can hold uh you know a full
28:19
copy of the, uh, index in memory. Yeah. And that all of a sudden enables a completely different design that
28:25
wouldn't have been practical before. Yeah. Um, so I'm I'm a big fan of
28:30
thinking through designs in your head, just kind of playing with the design space a little before you actually do a
28:37
lot of writing of code. But you know, as you said, in the early days of Google, we were growing the index, uh, quite
28:45
extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed
28:51
the most surprisingly. So it used to be once a month. Yeah. And then we went to a system that
28:58
could update any particular page in like sub one minute. Okay. Yeah. Because this is a competitive advantage, right?
29:05
Because all of a sudden, news-related queries... you know, if you've got last month's news index, it's not actually that useful.
29:11
News is a special beast. Was there an option to, like, split it onto a separate system? Well, we did launch a Google News
29:17
product, but you also want news related queries that people type into the main index to also be
29:22
sort of updated. So, yeah. Yeah. It's interesting. And then you have to like classify whether the page is you have to decide which pages
29:29
should be updated at what frequency. Oh yeah, there's a whole like uh system behind the scenes that's trying to
29:34
decide update rates and importance of the pages. So even if the update rate seems low, you might still want to
29:40
re-crawl important pages quite often because, uh, the likelihood they change might be
29:46
low but the value of having them updated is high. Yeah. Yeah. Yeah. Yeah. uh what you know
29:53
this, uh, you know, mention of latency and saving things to disk reminds me of one of your classics, which I have to
29:58
bring up, which is "latency numbers every programmer should know." Uh, was there just a general
30:04
story behind that did you just write it down? I mean this has like sort of eight or 10 different kinds of metrics that are like
30:10
how long does a cache miss take, how long does a branch mispredict take, how long does a reference to main memory take, how long does a disk seek take,
30:16
how long does it take to send, you know, a packet from the US to the Netherlands or something. Um,
30:22
why Netherlands by the way or is it is that because of Chrome? Uh, we had a data center in
30:29
um... So, I mean, I think this gets to the point of being able to do these back-of-the-envelope calculations. So these are
30:35
sort of the raw ingredients of those and you can use them to say okay well if I
30:40
need to design a system to do image search and thumbnailing or something of
30:45
the result page, you know, how might I do that? I could precompute the image thumbnails. I could, like, try to
30:52
thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk
30:58
seeks would I do? Um and you can sort of actually do thought experiments in you
31:04
know 30 seconds or a minute with the sort of uh basic uh basic numbers at your fingertips. Uh and then as you sort
31:11
of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it
31:18
take to you know look up something in this particular kind of hash table I use or you know how long will it take me to
31:24
sort a million numbers or something.
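(Here is that kind of 30-second thought experiment written out for the thumbnailing example; every constant is an assumed order-of-magnitude number, not a measured one.)

```python
# Assumed, order-of-magnitude constants for a thumbnailing thought experiment.
DISK_SEEK_S = 0.010          # ~10 ms per seek
DISK_READ_BW = 100e6         # ~100 MB/s sequential read
IMAGE_SIZE_BYTES = 500e3     # average full-size image (assumed)
THUMB_SIZE_BYTES = 10e3      # average precomputed thumbnail (assumed)
RESULTS_PER_PAGE = 20

# Option A: thumbnail on the fly -- read 20 full images per result page.
on_the_fly = RESULTS_PER_PAGE * (DISK_SEEK_S + IMAGE_SIZE_BYTES / DISK_READ_BW)

# Option B: precompute thumbnails -- read 20 small files.
precomputed = RESULTS_PER_PAGE * (DISK_SEEK_S + THUMB_SIZE_BYTES / DISK_READ_BW)

print(f"on the fly : {on_the_fly*1000:.0f} ms per page")
print(f"precomputed: {precomputed*1000:.0f} ms per page")
# Both are seek-dominated, so the real win is packing thumbnails contiguously so one seek
# serves a whole result page -- the kind of conclusion these numbers let you reach before
# writing any real code.
```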
31:30
Yeah. The reason I bring it up, actually, is that for I think two years now I've been trying to make "numbers every AI programmer should know." Okay. Yeah. Uh, I don't have a great one, uh, because
31:37
it's not physical constants, like you have physical constants in here, you know. And,
31:43
uh but I do think like uh so a simple one would be number of parameters to um
31:49
uh, disk size, if you need to convert that, uh, which is a simple byte conversion. That's nothing
31:54
interesting. I wonder if you have any, if you were to update yours... I mean, I think, uh, it's really good to
32:02
think about uh calculations you're doing in a model either for training or
32:07
inference. Um, often a good way to view that is how
Energy, Batching & TPU Co-Design
32:14
much, uh, state will you need to bring in from memory, either, like, on-chip SRAM or
32:21
HBM, the accelerator-attached, uh, memory, or DRAM, or over the network. Um,
32:27
and then how expensive is that data motion relative to uh the cost of say an
32:35
actual multiply in the matrix multiply unit and that cost is actually really really low right because it's you know order
32:43
you know, uh, depending on your precision, I think it's like sub-picojoule, one picojoule.
32:50
Oh, okay, you measure it by energy. Yeah, yeah. I mean, it's all going to be about energy and how you make things the most energy-efficient.
32:57
Um, and then moving data from the SRAM on the other side of the chip, not
33:03
even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules.
33:09
Oh. Yeah. And so all of a sudden this is why your accelerators, uh, require
33:16
batching, because if you move, say, a parameter of a model from SRAM on
33:22
the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that
33:28
thing that you moved many, many times. So that's where the batch dimension comes in, because all of a
33:34
sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's
33:39
really not good. Yeah. Yeah. Right. Because then you paid a thousand picojoules in order to do your one-picojoule
33:46
multiply. I have never heard an energy-based analysis of batching. Yeah. I mean, that's why people batch,
33:52
right? Yeah, ideally you'd like to use batch size one because the latency would be great but the energy cost and the the compute
33:59
cost inefficiency that you get, um, is quite large.
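(The amortization arithmetic, using the rough ~1 pJ multiply and ~1,000 pJ data-movement numbers from the discussion.)

```python
# Energy amortization of batching, using the rough numbers from the conversation:
# ~1 pJ for a multiply, ~1,000 pJ to move a weight from on-chip SRAM to the multiplier unit.
MOVE_PJ_PER_WEIGHT = 1_000.0
MULTIPLY_PJ = 1.0

for batch in [1, 8, 64, 256]:
    # One weight moved once is reused across every example in the batch.
    total_pj = MOVE_PJ_PER_WEIGHT + batch * MULTIPLY_PJ
    print(f"batch {batch:>3}: ~{total_pj / batch:8.1f} pJ per example per weight")
# Batch 1 pays ~1,001 pJ per useful multiply; batch 256 pays ~4.9 pJ. The data-movement
# cost gets amortized, which is the energy argument for batching.
```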
34:06
So yeah, is there a similar trick, like, like you did with, uh, you know, putting everything in memory? Like, you know, I think, uh, obviously Groq has
34:11
caused a lot of waves with, uh, betting very hard on SRAM. Uh, I wonder if, like, that's something that
34:17
you already saw with with the TPUs, right? Like that that you had to uh to
34:22
serve at your scale. Uh you probably sort of saw that coming like what what what hardware uh innovations or insights
34:31
were formed because of what you're seeing there. Yeah. I mean, I think you know, TPUs have this nice uh sort of regular
34:38
structure of 2D or 3D meshes with a bunch of chips connected and each one of
34:43
those has HBM attached. Um, I think for serving some kinds of models,
34:49
uh you know, you you pay a lot higher cost and time latency
34:54
um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the
35:00
chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of
35:06
chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so
35:12
you're now sort of striping your smallish-scale model over, say, 16 or 64 chips. Uh,
35:20
but if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a
35:26
good technique. Yeah. What about the TPU design? Like, how much do you decide where the
35:33
improvements have to go? So, like, this is a good example of, like, is there a way to bring the thousand picojoules down
35:39
to 50, and, like, is it worth designing a new chip to do that? The
35:45
extreme is, like, when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing.
35:50
How much of it is it worth doing in hardware when things change so quickly? Like what what's the internal
35:56
discussion? Yeah, I mean we we have a lot of interaction between say the TPU chip design architecture team and the
36:04
sort of higher level modeling uh experts because we really want to take advantage of being able to co-design what should
36:11
future TPUs look like based on where we think the sort of ML research puck is
36:16
going uh in some sense because uh you know as a hardware designer for ML in
36:21
particular you're trying to design a chip starting today and that design
36:27
might take two years before it even lands in a data center and then it has to sort of be a reasonable lifetime of
36:34
the chip to take you three, four or five years. So you're trying to predict two
36:39
to six years out where what ML computations will people want to run two
36:45
to six years out in a very fast changing field. And so having people with
36:51
interesting ML research ideas of things we think will start to work in that time
36:56
frame or will be more important in that time frame. Uh really enables us to then get you know interesting hardware
37:03
features put into, you know, TPU N+2, where TPU N is what we have today.
37:10
Oh the cycle time is plus two roughly. I mean because uh
37:15
I mean sometimes you can squeeze some changes into N+1, but, you know, bigger changes are going to require the chip
37:21
design be earlier in its lifetime design process. Um, so whenever we can do that,
37:28
it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but
37:34
if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you
37:39
burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's
37:45
a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of careful ML
37:53
experimentation to show us uh this is actually the the way we want to go. Yeah.
37:58
Is there a reverse of like we already committed to this chip design so we cannot take the model architecture that
38:04
way because it doesn't quite fit? Yeah. Yeah, I mean you you definitely have things where you're going to adapt
38:11
what the model architecture looks like so that they're efficient on the chips that you're going to have for both
38:18
training and inference of that of that uh generation of model. So I think it
38:25
kind of goes both ways. Um you know sometimes you can take advantage of you know lower precision things that are
38:32
coming in a future generation. So you might train it at that lower precision
38:37
even if the current generation doesn't quite uh do that. Mhm. Yeah. How low can we go in
38:42
precision? People are saying, like, ternary is, like... Yeah. I mean, I'm a big fan of very low
38:49
precision because I think that saves you a tremendous amount of energy, right? Because it's picojoules per bit that
38:54
you're transferring and reducing the number of bits is a really good way to to reduce that. Um, you know, I think
39:00
people have gotten a lot of, uh, mileage out of having very low bit precision things, but then having
39:07
scaling factors that apply to a whole bunch of, uh, those weights.
39:13
Okay. Interesting. So, low precision but scaled weights.
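(A minimal sketch of that idea: group-wise low-bit quantization with one floating-point scale per block of weights. The group size and bit width are illustrative choices.)

```python
import numpy as np

def quantize_groupwise(weights: np.ndarray, group: int = 32, bits: int = 4):
    """Quantize to low-bit integers with one float scale per group of weights."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for signed 4-bit
    w = weights.reshape(-1, group)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)       # avoid divide-by-zero
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, s = quantize_groupwise(w)
print(f"mean abs error after 4-bit group quantization: {np.abs(w - dequantize(q, s)).mean():.4f}")
# Most bits go to the low-precision integers; a handful of per-group scales recover dynamic
# range, which is the "scaling factors over blocks of weights" idea mentioned above.
```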
39:18
Yeah. Huh. Yeah. Never considered that. Interesting. Uh while we're on this topic, you know, I think there's a lot
39:24
of um uh just the concept of precision at all is weird when we're sampling, you
39:30
know, uh we just at the end of this we're going to have all these like chips that all do like very good math and then
39:36
we're just going to throw a random number generator at the start and so I mean I there's a movement towards
39:41
energy-based, uh, models and processors. I'm just curious if you've,
39:46
obviously you've thought about it but like what's your commentary? Yeah, I mean I think there's a bunch of
39:52
interesting trends. So energy based models is one. You know, diffusion based models which don't sort of sequentially
39:58
decode tokens is another. Yes. Um, you know, speculative decoding is a way that you can get sort of an
40:04
equivalent of a very small draft batch factor, uh, where, like, you predict
40:11
eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of
40:16
eight even and then you maybe accept five or six of those tokens. So you get
40:21
a 5x improvement in the amortization of moving weights, uh, into
40:27
the multipliers to do the prediction for the the tokens. So these are all really
40:33
good techniques and I think it's really good to look at them from the lens of uh
40:39
energy, real energy, not energy-based models, um, and also latency and
40:45
throughput right if you look at things from that lens that sort of guides you
40:50
to solutions that are going to be uh you know better from uh you know being able
40:56
to serve larger models or you know equivalent size models more cheaply and
41:02
with lower latency.
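(A toy illustration of the speculative-decoding amortization described above; the draft and verifier "models" are random stand-ins, and the acceptance probability is an assumption chosen to land near the five-or-six-of-eight figure mentioned.)

```python
import random

# Toy illustration of speculative decoding's amortization: a cheap draft model proposes k
# tokens, the big model verifies them in one pass, and only the accepted prefix counts.
random.seed(0)

def speculative_step(k: int = 8, accept_prob: float = 0.85) -> int:
    """Return how many tokens one big-model pass yields (accepted prefix plus its own token)."""
    accepted = 0
    for _ in range(k):
        if random.random() < accept_prob:
            accepted += 1
        else:
            break
    return accepted + 1  # the verifier always contributes at least one token itself

steps = [speculative_step() for _ in range(10_000)]
print(f"average tokens per big-model pass: {sum(steps) / len(steps):.2f}")
# Each pass of the big model moves its weights once but now emits ~5 tokens on average,
# so the per-token cost of weight movement drops by roughly that factor.
```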
41:07
Yeah. Well, I think, um, it's appealing intellectually. Uh, I haven't seen it really hit the mainstream, but, um, I do think that, uh, there's some
41:14
poetry in the sense that uh you know, we don't have to do uh a lot of shenanigans
41:19
if like we fundamentally design it into the hardware. Yeah. Yeah. I mean, I think there's
41:25
still... there's also sort of the more exotic things, like analog-based, uh,
41:31
computing substrates as opposed to digital ones. Uh I'm, you know, I think those are super interesting because they
41:36
can be potentially low power. Uh but I think you often end up wanting to interface that with digital systems
41:42
and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions you end up
41:48
doing uh at the sort of boundaries and periphery of that system. M um I still think there's a tremendous
41:55
distance we can go from where we are today in terms of energy efficiency with
42:00
sort of uh much better and specialized hardware for the models we care about.
42:05
Yeah. Um any other interesting research ideas that you've seen or like maybe things
42:11
that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you
42:17
have a lot of researchers. Yeah, we have a lot of... our research portfolio is pretty broad, I would say. Um, I mean, I
Research Frontiers: Reliability & RL Challenges
42:25
think uh in terms of research directions, there's a whole bunch of uh
42:30
you know open problems and how do you make these models reliable and able to do much longer kind of uh more complex
42:38
tasks that have lots of subtasks? How do you orchestrate you know maybe one model
42:43
that's using other models as tools in order to sort of build uh things that can accomplish uh you know much more
42:50
significant pieces of work uh collectively than you would ask a single model to do. Um so that's super
42:57
interesting. How do you get more verifiable uh you know how do you get RL
43:02
to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would
43:08
broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh if we
43:15
could apply those to other less verifiable domains because we've come up with RL techniques that actually enable
43:20
us to do that uh effectively that would that would really make the models improve quite a lot. I think
43:27
I'm curious, like, when we had Noam Brown on the podcast, he said, um, they already proved you can do it with Deep Research.
43:33
Mhm. Um, you kind of have it with AI mode in a way. It's not verifiable.
43:39
I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information
43:45
retrieval-ish. So I wonder if it's like the retrieval is the verifiable part that you can score, or
43:52
what are like yeah how how would you model that that problem? Yeah, I mean I think there are ways of
43:58
having other models that can evaluate the results of what a first model did.
44:04
Maybe in retrieving, can you have another model that says, is this things are these things you retrieved relevant
44:10
or can you rate these 2,000 things you retrieved to assess which ones are the
44:16
50 most relevant or something. Um, I think those kinds of techniques are actually quite effective. Sometimes that
44:21
can even be the same model just prompted differently to be a you know critic as opposed to a uh actual retrieval system.
44:28
Yeah. Um, I do think like there there is that that weird cliff where like it
44:35
feels like we've done the easy stuff and then now it's but it always feels like that like every year it's like oh like
44:40
we know you know and the next part is super hard and nobody's figured it out and uh like exactly with this RLVR thing
44:48
where like everyone's talking about well okay how do we do the next stage of the non-verifiable stuff and everyone's like
44:54
I don't know, you know, a judge? I mean, I feel like the nice thing about
44:59
this field is there's lots and lots of smart people thinking about creative solutions to some of the, you know,
45:05
problems that we all see. Uh because I think everyone sort of sees that the models, you know, are great at some
45:11
things and they fall down around the edges of those things and and are not as capable as we'd like in those areas. And
45:17
then coming up with good techniques and trying those and seeing which ones actually make a difference is sort of
45:23
what the whole research aspect of this field is is pushing forward. And I think that's why it's super interesting. You
45:29
know, if you think back two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two
45:36
rabbits, he gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds
45:42
of mathematics that the models can do now. And now you're doing... Yeah. And in pure language. Yeah.
45:49
Yeah. Pure language. So that is a really really amazing jump in
45:55
capabilities in you know a year and a half or something. And I think um
46:01
for other areas it'd be great if we could make that kind of leap. Uh and you know we don't exactly see how to do it
46:08
for some some areas but we do see it for some other areas and we're going to work hard on making that better.
46:13
Yeah. Yeah. Like YouTube thumbnail generation that would be very helpful. We need that. That would be AGI. we need
46:20
for as far as content creators go. I guess I'm not a YouTube creator, so I don't care that much about that problem,
46:26
but I guess, uh, many people do. Yeah, it does matter. People do judge books by their covers, as
Unified Models vs Symbolic Systems (IMO)
46:31
it turns out. Um just to draw a bit on the IMO gold. Um I'm still not over the fact that a
46:37
year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll
46:44
just chuck it into Gemini. What's your reflection? Like I think this this
46:49
question about, like, the merger of symbolic systems and LLMs,
46:54
uh, was very much a core belief, and then somewhere along the line people just said, nope, we'll just do it all in the LLM.
47:02
Yeah. I mean, I think it makes a lot of sense to me because, you know, humans
47:09
manipulate symbols, but we probably don't have like a symbolic representation in our heads, right? We have some distributed
47:15
representation that is neural netlike in some way of lots of different neurons and activation patterns firing when we
47:23
see certain things and that enables us to reason and plan and, you know, do
47:28
chains of thought and, you know, roll them back. you know that that approach for solving the problem doesn't seem
47:34
like it's going to work. I'm going to try this one. And you know, in a lot of ways, we're emulating what we
47:39
intuitively think uh is happening inside real brains in neural netbased models.
47:45
So it never made sense to me to have like completely separate
47:50
discrete uh symbolic things and then a completely different way of of uh you
47:57
know thinking about those things. Interesting. Yeah. Uh I mean it's maybe
48:03
seems obvious to you but it wasn't obvious to me a year ago. Yeah. I mean I do think like that
48:08
IMO with, you know, translating to Lean and using Lean, and then the next year
48:14
and and also a specialized geometry model and then this year switching to a single unified model that is roughly the
48:22
production model with a little bit more inference budget uh is actually you know quite good because it shows you that the
48:29
capabilities of that general model yeah have improved dramatically and and now you don't need these specialized models.
48:34
This is actually sort of very similar to the 2013-to-2016 era of machine learning,
48:41
right? Like, it used to be people would train separate models for each different problem, right? I want to recognize street signs
48:49
in something, so I train a street sign recognition model. Or I want to, you know, do speech recognition, so I
48:55
have a speech model. Right? I think now the era of unified models that do
49:00
everything is really upon us and the question is how well do those models
49:06
generalize to new things they've never been asked to do and they're getting better and better and you don't need domain experts like
49:12
one of my uh so I interviewed Eay who was on who's on that team uh and he was like yeah I I don't know how they work I don't know where the IMO
49:19
competition was held I don't know the rules of it I just train the models I'm good at training models
49:25
and it's kind of an interesting thing that, like, people with this universal skill set of just, like, machine learning: you just give them data and
49:32
give them enough compute and they can kind of tackle any task which is yeah right and
49:37
a bitter lesson I guess I don't know yeah yeah I mean I think uh general models uh will win out over specialized
49:44
ones in most cases so I want to push there a bit I think there's one hole here which is like uh
49:50
there's this concept of like uh maybe capacity of a model like abstractly a model can only contain the number of
49:56
bits that it has and uh and so you know god knows like Gemini Pro is
50:03
like one to 10 trillion parameters we don't know but uh the Gemma models for example right like a lot of people want
50:09
like the open source local models that are like that that that and and uh they
50:15
have some knowledge which is not necessary right like they can't know everything like like you have the luxury
50:20
of... you have the big model, and the big model should be capable of everything. But, like, when you're distilling and
50:26
you're going down to the small models, you know, you're actually memorizing things that are not useful and so like how do we I guess do we want
50:33
to extract that? Can we can we divorce knowledge from reasoning, you know?
Knowledge vs Reasoning + Vertical/Modular Models
50:39
Yeah. I mean, I think you do want the model to be most effective at reasoning
50:44
if it can retrieve things, right? having the model devote precious parameter space to remember obscure
50:51
facts that could be looked up is actually not the best use of that parameter space right like you might
50:57
prefer something that is more generally useful in more settings than this
51:02
obscure fact that it has um so I think that's always a tension at the same time you also don't want your model to be
51:10
kind of completely detached from you know knowing stuff about the world right
51:15
like it's probably useful to know how long the Golden Gate Bridge is just as a
51:20
general sense of like how long are bridges, right? And uh it should have that kind of knowledge. It maybe doesn't
51:27
need to know how long some teeny little bridge in some other more obscure part of the world is, but uh it does help it
51:35
to have a fair bit of world knowledge. And the bigger your model is, the more you can have. Uh but I do think
51:41
combining retrieval with sort of reasoning and making the model really good at doing multiple stages of
51:49
retrieval and reasoning through the intermediate retrieval results is going to be a a pretty effective way of making
51:55
the models seem much more capable because if you think about say a personal Gemini
52:00
Yeah. Right? Like we're not going to train Gemini on my email. Probably we'd rather have a single model that uh we
52:08
can then use and use being able to retrieve from my email as a tool and have the model reason about it and
52:14
retrieve from my photos or whatever. Uh and then make use of that and have multiple u you know stages of
52:22
interaction. That makes sense. Do you think the vertical models are like an interesting
52:28
pursuit? Like when people are like, "Oh, we're building the best healthcare LLM. We're building the best law LLM." Are
52:34
those kind of like short-term stopgaps, or... No, I mean, I think vertical
52:39
models are interesting like you want them to start from a pretty good base model, but then you can sort of I sort
52:46
of view them as enriching the data distribution for that
52:51
particular vertical domain, for healthcare, say. Um, we're probably not going to train... or, for, say, robotics,
52:58
we're probably not going to train Gemini on all possible robotics data we could train it on, because we wanted to
53:05
have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a
53:10
really, really good robotics model, you're going to want to start with that and then train it on more robotics data
53:17
and then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. And
53:24
we're always making these kind of uh, you know, tradeoffs in the data mix that
53:29
we train the base Gemini models on. You know, we'd love to include data from 200
53:34
more languages and as much data as we have for those languages. Yeah. But that's going to displace some other
53:41
capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still
53:47
be good at Python programming because we'll include enough of that, but there's other longtail computer
53:52
languages or coding capabilities that it may suffer on or multi- uh multimodal
53:58
reasoning capabilities may suffer because we didn't get to expose it to as much data there, but it's really good at
54:03
multilingual things. So, I I think some combination of specialized models, maybe
54:09
more modular models. So it'd be nice to have the capability to have those 200
54:14
languages plus this awesome robotics model plus this awesome healthcare uh module that all can be knitted together
54:22
to work in concert and called upon in different circumstances, right? Like if I have a health related thing, then it
54:28
should enable using this health module in conjunction with the main base model
54:33
to be even better at those kinds of things. Yeah. Installable knowledge. Yeah. Right. just download as a as a
54:39
and some of that installable stuff can come from retrieval, but some of it probably should come from
54:45
training on you know uh 100 billion tokens or a trillion tokens of health data.
54:50
Yeah. And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think.
54:56
Yeah. Yeah. I guess the question is like how many billions of tokens do you need to
55:01
outpace the frontier model improvements? You know, it's like if I
55:06
have to make this model better at healthcare and the main Gemini model is still improving,
55:11
do I need 50 billion tokens? Can I do it with 100? If I need a trillion healthcare tokens, it's like they're
55:18
probably not out there that you don't have, you know, I think that's really like the challenge. Oh, I mean, I think healthcare is a
55:23
particularly challenging domain. So there's a lot of healthcare data that, you know, we appropriately don't have access to, but there's a lot of, you
55:30
know uh healthcare organizations that want to train models on their own data that is not public healthcare data uh
55:39
not public health data, but non-public healthcare data. Um, so I think there are
55:44
opportunities there to say partner with a large healthcare organization and train models for their use that are
55:51
going to be, you know, more bespoke but probably uh might be better than a
55:56
general model trained on say public data. Yeah. Yeah. I I believe uh by the way al this
Multilingual + Low-Resource Language Insights
56:01
is like somewhat related to the language conversation. Uh I think one of your your favorite examples was you can put a
56:06
low resource language in the context and it just learns in context. Oh yeah. I think the example we used was
56:12
Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no
56:18
written text. So you can just do it that way, just to get it in the context.
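(A sketch of what that looks like in practice, assuming a hypothetical call_gemini helper rather than any particular SDK: the field-linguistics materials go straight into the prompt and the model learns the language in context.)

```python
# Sketch of in-context learning of a very low-resource language. `call_gemini` is a
# hypothetical stand-in for whatever long-context model API you actually use.

def call_gemini(prompt: str) -> str:   # hypothetical; replace with a real client call
    raise NotImplementedError

def translate_with_context(grammar_notes: str, wordlist: str, sentence: str) -> str:
    prompt = (
        "You are given field-linguistics materials for a language with almost no written text.\n\n"
        f"GRAMMAR NOTES:\n{grammar_notes}\n\n"
        f"WORD LIST (language -> English):\n{wordlist}\n\n"
        f"Translate into English: {sentence}"
    )
    return call_gemini(prompt)
# With a ~1M-token context window, an entire grammar book plus dictionary fits in the
# prompt, so the model can pick up the language at inference time rather than in training.
```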
56:24
Yeah. Yeah. But you put your whole data set in context, right? If you take a language like, uh,
56:29
you know Somali or something there is a fair bit of Somali text in the world that uh or Ethiopian Amharic or
56:36
something um you know we probably are not putting all the data from those
56:42
languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the
56:47
capabilities of those models. Yeah. Or of those languages. Uh, yeah, cool. Uh, I have a
56:57
side interest in linguistics. I did a few classes back in college, and, like, uh, part of me, like, if I was a
57:04
linguist and I could have access to all these models I would just be asking really fundamental questions about language itself like uh one is there's
57:11
one very obvious one, which is Sapir-Whorf: like, how much does the language that you speak affect your thinking? But then also there are some
57:18
languages where there's just concepts that are not represented in other languages but some others many others that are just duplicates right where uh
57:24
there's also another paper that people love, called the Platonic Representation Hypothesis, where, you know, an image of
57:30
a cup, uh, if you, say, learn a model on that, and you have a lot of text
57:36
with the word cup, it eventually maps to roughly the same place in latent space. And so, like, that should apply to
57:42
languages except where it doesn't and that's actually like very interesting differences in what humanity has
57:49
discovered as concepts that maybe English doesn't have.
57:55
I don't know. That's just my rant on languages. Yeah, I did some work on an early model
Vision-Language Representations Example
58:01
that fused together a language-based model, where you have, you know, nice word-based representations, and an
58:08
image model where you have trained it on ImageNet-like things. Yes. And then you fuse together the top
58:14
layers of... uh, no, this is DeViSE. Uh, you do a little bit more training to
58:22
fuse together those representations. And what you found was that if you give a novel image that is not in any of the
58:28
categories the image model was trained on, the model can often assign
58:33
kind of the right label to that image. Um, so for example, um, I think
58:40
uh telescope and uh binoculars were both
58:45
in the training uh categories for the image model but um microscope was not.
58:51
M. And so if you give it an image of a microscope, it actually can come up with something that's got the word microscope
58:56
as the label it assigns, even though it's never actually seen an image labeled that.
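(A toy version of that zero-shot labeling mechanism: map an image into the same space as word embeddings and take the nearest word. The embeddings here are random stand-ins, purely to show the nearest-neighbor step.)

```python
import numpy as np

# Toy version of the fused vision-language idea: an image is mapped into the same embedding
# space as words, and the nearest word vector becomes its label, even for categories the
# image tower never saw. All embeddings here are random stand-ins, not trained ones.
rng = np.random.default_rng(0)
words = ["telescope", "binoculars", "microscope", "dog"]
word_vecs = {w: rng.normal(size=32) for w in words}

def embed_image(image) -> np.ndarray:
    # Stand-in for a trained image tower projected into the word-embedding space;
    # here we just pretend it lands near the "microscope" region of that space.
    return word_vecs["microscope"] + 0.1 * rng.normal(size=32)

def zero_shot_label(image) -> str:
    v = embed_image(image)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(words, key=lambda w: cos(v, word_vecs[w]))

print(zero_shot_label(image=None))  # -> "microscope", though no image was ever labeled that
```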
59:02
Oh, that's nice. Yeah. Um, so yeah,
59:07
useful. Uh, cool. I think there there's more general like broad questions, but
59:12
like, I guess, what do you, uh, wish you were asked more, in general? Like, you know, you have such a broad
59:18
scope. We've covered the hardware, covered the models, the research. Yeah, I mean, I think, uh,
59:25
one thing that's kind of interesting is you know I I did a undergrad thesis on
59:31
neural network uh training uh parallel neural network training uh back in 1990 when I got exposed to to neural nets and
59:37
I always felt kind of they were the right abstraction but we just needed way more compute than we had then. So like
59:43
the 32 processors in the department parallel computer you know could get you a a little bit more interesting uh model
59:50
but not not enough to solve real problems. And so starting in 2008 or nine, you know, the world started to
59:57
have enough computing power through Moore's law and, you know, larger interesting data sets to train on to
1:00:03
actually, you know, start training neural nets that could tackle real problems that people cared about, speech
1:00:09
recognition, vision, and eventually language. Um and so um when I started working on
1:00:17
neural nets at Google in in late 2011 um you know I really just felt like we should scale up the size of neural
1:00:24
networks we can train using you know large amounts of parallel computation and so I actually revived some ideas for
1:00:31
my undergrad thesis where I'd done both model parallel and data parallel uh
1:00:36
training and I compared them. I called them something different. It was like pattern partitioned and you
1:00:42
know, model-partitioned or something. We'll have to... Is it public? Can we go dig it up? Yeah, it's on the web. Okay. Um, but, uh, you know, I think combining a
1:00:50
lot of those techniques and really just trying to push on scaling things up over the last you know 15 years has been you
1:00:56
know really important and that means you know improvements in the hardware. So
1:01:01
you know pushing on building specialized hardware like TPUs. Uh it also means you know pushing on software abstraction
1:01:08
layers to let people express ML ideas uh effectively. Um and then also working on
1:01:16
things like uh say sparse models. I've felt for a long time that sort of
1:01:21
sparsely activated models are a really important thing because you want the models to have a lot of capacity to our
1:01:27
earlier discussion about remembering a lot of stuff. Yeah. But you also want to be super efficient
1:01:32
in how you activate your models. So you'd like you know trillions of parameters but activate only you know
1:01:39
1% or 5% or 10% of that. And, um,
1:01:44
you know, we did an early paper on this where we really scaled up, uh, you know,
1:01:50
outrageously large neural networks, that's the title. I think that's Noam's, uh, Noam's wording in the title, which is a
1:01:56
good catchy title. I mean, in 2017 he was out there talking about one-trillion-parameter models.
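(A toy sparsely-gated mixture-of-experts forward pass, in the spirit of that work: the layer holds many experts, but each token only activates its top-k, so capacity grows without proportional per-token compute. Sizes here are illustrative.)

```python
import numpy as np

# Toy sparsely-gated mixture-of-experts layer: many experts exist, but each token only
# activates its top-k, decoupling parameter count from per-token compute.
rng = np.random.default_rng(0)
D, NUM_EXPERTS, TOP_K = 64, 16, 2

gate_w = rng.normal(size=(D, NUM_EXPERTS))              # router weights
experts = rng.normal(size=(NUM_EXPERTS, D, D)) * 0.02   # one weight matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ gate_w                                  # router scores, shape (NUM_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]                    # indices of the top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    out = np.zeros(D)
    for w, e in zip(weights, top):                       # only k of the 16 experts run
        out += w * (x @ experts[e])
    return out

token = rng.normal(size=D)
print(moe_forward(token).shape, f"-- used {TOP_K}/{NUM_EXPERTS} experts for this token")
```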
1:02:02
Yeah. So I mean that that that is really good because that gave you like a 10x improvement in you know time to quality
1:02:09
or compute cost to a given quality level relative to non-sparse models. Um,
1:02:15
transformers similarly gave you a 10x to 100x improvement in you know uh compute
1:02:21
cost to a given quality level uh versus say LSTMs at the time and all of those
1:02:26
things multiply together. Um so I think all those things really are important to
1:02:32
work on you know the hardware the systems infrastructure the you know algorithmic aspects of model
1:02:38
architecture improving the data you know improving the RL recipes all these
1:02:43
things, uh, are what are stacking together and multiplying together to give us models of 2026 that
1:02:51
are much better than models of '25 and are awesomely better than '24 and '23.
1:02:57
and and and a huge uh honestly like organizational challenge like there's like a thousand people or maybe more
1:03:04
like, I know when the first Gemini paper came out it was like a thousand co-authors. Yeah. Yeah. We have, uh, 10 pages of
1:03:10
co-authors in the in the tech report but it was nice. I mean you know people want to be acknowledged on probably a
1:03:16
historical paper. Yeah. I mean, I think it's perfectly good to have actually a lot of co-authors and I do think
1:03:22
organizing that number of people so that they're effectively pushing in common
1:03:27
directions that all all their work actually sort of multiplies together in
1:03:34
the ultimate output which is you know the next generation of model is actually pretty tricky and we have awesome people
1:03:40
uh, throughout the Gemini team to help orchestrate this. So, you know, myself, Noam, and Oriol are sort of helping steer
1:03:48
this and then we have people thinking about you know what is the pre-training uh setup look like what does the
1:03:53
infrastructure look like, what does the post-training recipe look like, and what does the data preparation and eval,
1:03:59
multimodal capabilities and IN capabilities um you know there's a lot of different
1:04:06
kinds of areas coding capabilities all these areas are are super important and it's really good to have people uh
1:04:13
paying close attention to those things and then also paying close attention to all the other things.
1:04:18
Yeah. I'm told Sergey is like very actively back and like very much involved in coding stuff.
1:04:24
Yep. Yeah. Yeah. Yeah. We all use the same micro kitchen. Yeah. Uh oh. Okay. Like there's so many
1:04:31
jumping off point. Uh so by the way I found out from the recent uh I mean you've probably told this story a few times but apparently Google brain was
1:04:37
also started in a micro kitchen. Yeah. Yeah. Just like your micro kitchens are very important.
1:04:42
Yeah. I don't know if people quite understand that. Yeah. Yeah, I actually bumped into
1:04:48
Andrew Ng, who's a Stanford faculty member, and I knew him because I'd given talks at Stanford a couple of years before,
1:04:54
so I sort of knew him, and I'm like, "Oh, what are you doing here?" He's like, "Oh, I'm not sure yet. I just started, you know, a couple weeks ago. I'm going
1:05:00
to spend one day a week here consulting. I'm not sure what I'm working on, but my students at Stanford are starting to
1:05:06
get good results on using neural nets for speech recognition." I'm
1:05:12
like, "Oh, neural nets. I like neural nets." I remembered back to my 1990 thesis. I'm like, "Oh, that sounds
1:05:18
interesting. We should train really, really big neural nets." So that was the start of it. You say that, and that's a very
1:05:24
interesting first instinct, which is that we should scale this up a lot. Yeah. Well, I mean, I felt like
1:05:31
Google has lots of computational capability, and so if they were seeing
1:05:38
good results on what were effectively single-GPU models...
1:05:44
well, we actually didn't have GPUs in our data centers at the time, we didn't have any accelerators. We
1:05:50
had lots of CPUs, but we could build a software system that would enable you to distribute with both model
1:05:56
parallelism and data parallelism across lots of computers. And we ended up training a pretty big model that was 50x
1:06:02
bigger than any previous neural net, as far as we could tell. It was a two-billion-parameter vision model
1:06:10
trained on 16,000 CPU cores for multiple weeks. And that gave
1:06:16
us a 70% relative error improvement on ImageNet 22K, which is the 22,000-category version,
1:06:24
and that's how we really saw that, okay, scaling this up actually matters. We
1:06:29
didn't write a sophisticated scaling analysis, but we had a saying: bigger model, more data,
1:06:35
better results. And that was our mantra for six or seven years of scaling. And every
1:06:42
time we did that, we saw better results in speech, in language, in vision.
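A minimal sketch of the data-parallel half of that setup (toy code under stated assumptions, not the actual DistBelief system): each worker computes gradients on its own shard of the batch, the gradients are averaged, and one shared copy of the parameters is updated. Model parallelism, which splits the parameters themselves across machines, is omitted here.

```python
# Synchronous data parallelism on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)                              # shared parameters
X = rng.normal(size=(1024, 8))               # toy inputs
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1024)

def worker_grad(w, X_shard, y_shard):
    """Gradient of mean squared error on this worker's shard."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(y_shard)

num_workers, lr = 4, 0.05
for step in range(200):
    shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
    grads = [worker_grad(w, Xs, ys) for Xs, ys in shards]  # would run in parallel in practice
    w -= lr * np.mean(grads, axis=0)                       # all-reduce (average) + shared update
```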
1:06:48
Speaking of bets, and I'll preface this by saying it might be a slightly more
1:06:54
sensitive topic, but you obviously have a lot of opinions about this. We had a previous guest, David Luan, who used to
1:06:59
work for you, and he kind of blames the Brain marketplace as
1:07:05
the reason that Google didn't invest enough in language models. And I wonder whether that's something you
1:07:12
would agree with at the time, or whether there's a different sort of postmortem. The Brain marketplace for compute,
Gemini Origin Story + Organizational Memo
1:07:18
compute quotas, where basically his view was: David worked at OpenAI as VP of Engineering, then
1:07:25
he worked at Google, and fundamentally OpenAI was willing to go all in, bet the farm on one thing, whereas Google was more democratic:
1:07:32
everyone had a quota. And I was like, okay, if you believe scaling is an
1:07:37
important thing, that's an important organization-wide decision to make. Yeah. Yeah, I mean, I think I would
1:07:45
somewhat agree with that. I mean, I actually wrote a one-page memo saying
1:07:51
we were being stupid by fragmenting our resources. So in particular, at the time we had
1:08:00
efforts within Google Research, and in the Brain team
1:08:05
in particular, on large language models. We also had efforts on multimodal models
1:08:11
in other parts of Brain and Google Research, and then legacy DeepMind
1:08:17
had efforts like the Chinchilla models and the Flamingo models. And so
1:08:24
really we were fragmenting not only our compute across those separate efforts,
1:08:31
but also our best people and our best ideas, right? And so I said, this is just
1:08:36
stupid. Why don't we combine things and have one effort to... And this is the merge. Yeah.
1:08:42
To train an awesome, single, unified model that is multimodal from the start, that's
1:08:47
good at everything. And that was the origin of the Gemini effort, and my one-
1:08:54
page memo worked, which is good. Did you have the name? Because, for those who don't know, you named Gemini.
1:08:59
I did. Yeah. There was another name proposed, and I said, you know, these two
1:09:05
organizations really are like twins, in some sense, coming together. So I
1:09:12
kind of liked that. And then there's also the NASA interpretation, with the early Gemini project
1:09:18
being an important step on the way to the Apollo project. So it
1:09:24
seemed like a good name. Twins coming together, right? Yeah. Nice. I know we're
Coding with AI & Agent Interaction Style
1:09:30
already running out of time, but I'm curious how you use AI today to code. I mean, you're probably one of the
1:09:35
most prolific engineers in the history of computer science. I was reading through the article about you and
1:09:42
Sanjay's friendship and how you work together, and you have one quote about how you need to find someone to pair
1:09:49
program with who's compatible with your way of thinking, so that the two of you together are a complementary force. Mhm.
1:09:55
And I was thinking about how you think about coding agents in this light: how do you shape a coding agent to be compatible with
1:10:02
your way of thinking? How would you rate the tools today? Where should things go? Yeah. I mean, first, I think the coding
1:10:09
tools are getting vastly better compared to where they were a year or two ago. So now you can
1:10:15
actually rely on them to do more complex things that you as a software engineer want to accomplish, and you can
1:10:22
delegate pretty complex things to these tools. And I think one
1:10:28
really nice aspect of the interaction between a human
1:10:34
software engineer and the coding model they're working with is that your way of
1:10:41
talking to that coding model actually dictates how it interacts
1:10:48
with you, right? You could ask it, please write a bunch of good tests for this. You could ask it, please help me
1:10:55
brainstorm performance ideas. And your way of doing that is going to shape how
1:11:00
the model responds and what kinds of problems it tackles. How much do you want the model to go off and do
1:11:06
things that are larger and more independent, versus interacting with it more to make sure that you're shaping the
1:11:12
right kinds of things? And I think it's not the case that any one style is
1:11:18
the right thing for everything, right? For some kinds of problems you actually want a more frequent
1:11:24
interaction style with the model, and for others you're just like, "Yeah, please just go write this, because I know I need this thing. I can specify it well
1:11:30
enough. Go off and do it and come back when you're done." And so I do think there's going to be more of a
1:11:38
style of having lots of independent software agents off doing things on your behalf, and figuring out the right sort
1:11:45
of human-computer interaction model and UI and so on for when it should interrupt you and say, hey, I need a
1:11:52
little more guidance here, or, I've done this thing, now what should I do? I think we don't have the end-all
1:11:58
answer to that question, and as the models get better, the set of
1:12:03
decisions you put into how the interaction should happen may change. Like, if you have a team of
1:12:12
50 interns, how would you manage that if they were people? And I think it's not...
1:12:19
Do you want 50 interns? You might, if they're really good, right? It's a lot of management.
1:12:24
But it's a lot of... Yeah, I mean, I think it is probably
1:12:29
within the realm of possibility that lots of people could have 50 interns, and so how would you actually deal with
1:12:36
that as a person, right? You would probably want them to form small sub-teams so you don't have to interact with all
1:12:42
50 of them. You could interact with five of those teams, and they're off doing things on your behalf.
1:12:49
But I don't know exactly how this is going to unfold. Yeah. How do you think about bringing
1:12:55
people in? Pair programming is always helpful for getting net new ideas into the distribution, so to speak. It
1:13:02
feels like, as we have more of these coding agents writing the code, it's harder to bring other people into the problem. Say
1:13:08
you have your 50 interns, right, and then you want to go to Noam Shazeer and be like, hey Noam, I want to
1:13:13
pair on this thing, but now there's this huge amount of work that has been done in parallel that
1:13:18
you need to catch him up on, right? And I'm curious whether people are going to be, in a way, more isolated in their
1:13:24
teams, where it's like, okay, there's so much context in these 50 interns that it's just hard for me to relay
1:13:31
everything back to you. Maybe. I mean, on the other hand,
1:13:36
imagine a classical software organization without any AI-assisted tools, right? You would have
1:13:43
50 people doing stuff, and their interaction style is going to be
1:13:48
naturally very hierarchical, because these 50 people are going to
1:13:53
be working on this part of the system and not interact that much with those other people over there. But if you have
1:13:59
five people each managing 50 virtual agents, they might be
1:14:05
able to actually have much higher-bandwidth communication among the five people than you would have among five
1:14:11
people who are each also trying to coordinate a 50-person software team. Yeah. So,
1:14:16
I'm curious how you change your working rhythm. Do you spend more time up front with people
1:14:23
going through specs and design goals? I mean, I do think it's interesting
Prompting Skills & Spec Design
1:14:29
that whenever people were taught how to write software, they were
1:14:34
taught that it's really important to write specifications super clearly. But no one really believed that. It was
1:14:40
like, yeah, whatever, I don't need to do that. I mean, writing the
1:14:45
English-language specification was never an artifact that got a lot of
1:14:51
attention. It was important, but it wasn't the thing that drove the actual creative process quite
1:14:58
as much. Whereas if you're specifying what software you want the agent to write for you, you'd better be pretty
1:15:06
darn careful in how you specify it, because that's going to dictate the quality of the output, right? If
1:15:11
you don't cover that it needs to handle this kind of thing, or that this is a super important corner case, or that
1:15:19
you really care about the performance of this part of it, it may not do what you want. And the
1:15:25
better you get at interacting with these models... I think one of the ways
1:15:30
people will get better is that they will get really good at crisply specifying things rather than leaving things ambiguous,
1:15:38
and that is not a bad skill to have regardless of whether you're a software engineer or
1:15:44
you're trying to do some other kind of task. Being able to crisply specify what it is you want
1:15:51
is going to be really important. Yeah. My joke is, you know, good prompting is indistinguishable
1:15:57
from sufficiently advanced executive communication. It's like writing an internal memo.
1:16:03
Yeah. Yeah. Weigh your words very carefully. And I also think it's very important to be multimodal, right? One thing
1:16:08
that Antigravity from Google also did was just come out of the gate very strong on multimodal, including videos,
1:16:14
and that's the highest-bandwidth communication prompt that you can give the model, which is fantastic.
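To make the point about crisp specification concrete, here is a hedged illustration (a hypothetical spec, not any official prompt format): the same request written vaguely and then with the corner cases and performance constraints called out explicitly, as Dean suggests.

```python
# Hypothetical example prompts; the function name and requirements are
# invented for illustration only.
VAGUE_SPEC = "Write a function that merges the log files."

CRISP_SPEC = """\
Write merge_logs(paths: list[str]) -> Iterator[str].
- Inputs: newline-delimited log files, each already sorted by a leading
  RFC 3339 timestamp.
- Output: a single stream sorted by timestamp; ties keep input-file order.
- Corner cases: empty files; missing files (raise FileNotFoundError);
  lines with unparseable timestamps (skip, count, report at the end).
- Performance: stream with a heap; do not load whole files into memory.
- Include unit tests covering each corner case above.
"""
```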
1:16:20
Yeah. How do you collect the things you would normally keep in your head? You have this amazing performance-hints
1:16:26
document that you wrote about how to look for performance improvements. Is there a lot more value in people
1:16:33
writing these generic things down so that they can be fed back as retrieval artifacts for
1:16:40
the model? Edge cases are a good example, right? If you're building systems, you
1:16:46
already have specific edge cases in your mind, depending on the system, but now you have to repeat them every time.
1:16:51
Are you having people spend a lot more time writing out more generic things to bring back, or...
1:16:57
I mean, I do think well-written guides on how to do good
1:17:03
software engineering are going to be useful, because they can be used as input to models, or read by other
1:17:10
developers so that their prompts are clearer about what the
1:17:15
underlying software system should be doing. I think it may
1:17:21
not be that you need to create a custom one for every situation. If you have general guides and put those into
1:17:29
the context of a coding agent, that can be helpful. You can
1:17:35
imagine one for distributed systems. You could say, okay, think about failures of these kinds, and here are some
1:17:40
techniques you can use to deal with failures. You can have Paxos-like replication, or you
1:17:47
can send the request to two places and tolerate failure because you only
1:17:52
need one of them to come back. A little description of 20 techniques like that for building distributed
1:17:58
systems would probably go a long way toward having a coding agent be able to put together more reliable and robust distributed systems.
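As one concrete entry such a guide might contain, here is a minimal sketch (hypothetical code, not from any Google system) of the technique Dean mentions of sending the same request to two replicas and tolerating one of them failing or being slow:

```python
# Toy "hedged request": issue the same request to two replicas and use
# whichever response comes back first, tolerating one slow or failed replica.
# A real implementation would also cancel the slower call.
import concurrent.futures, random, time

def call_replica(name):
    time.sleep(random.uniform(0.01, 0.2))      # simulated variable latency
    if random.random() < 0.1:
        raise RuntimeError(f"{name} failed")
    return f"result from {name}"

def hedged_request():
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(call_replica, r) for r in ("replica-a", "replica-b")]
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()            # first successful response wins
            except RuntimeError:
                continue                       # fall through to the other replica
    raise RuntimeError("both replicas failed")

print(hedged_request())
```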
1:18:05
Yeah. Yeah. I wonder when Gemini will be able to build Spanner,
1:18:11
right? It probably already has the code inside, you know. Yeah, I mean, that's a good example,
1:18:18
right? You have the CAP theorem, and it's like, well, this is the truth and you cannot break it, and
1:18:24
then you build something that broke it. I'm curious, because models in a way... Wait, what did he say, he broke it?
1:18:30
Would you say you broke the CAP theorem? Really? Yeah. Okay. All right. I mean,
1:18:36
under some assumptions. Yeah. And, you know, good clocks. Yeah. Sometimes you don't
1:18:42
have to always follow what is known to be true. And I think models, in a
1:18:48
way, if you tell them something, they really buy into it, you know. So, yeah, that's more of a thought than an
1:18:56
answer on how to fix that. Yeah, just on this big
1:19:01
prompting and iteration point, and coming back to your latency point: one
1:19:07
A/B test or experiment or benchmark I'd like to see is, what is the
1:19:13
performance difference between, say, three dumb, fast model calls with human alignment, where the human
1:19:19
looks at the first result and produces a new prompt for the second call, as opposed to
1:19:27
speccing it out, spending a long time writing a big, fat prompt, and then having a very smart model do it, right? Because
1:19:34
is our lack of performance really an issue of, well, you just haven't
1:19:40
specified it well enough? There's no universe in which I can produce what you want, because you just haven't told me, right? It's underspecified. So I could
1:19:46
produce ten different things and only one of them is the thing you wanted. Yeah. And maybe the multi-turn interaction with a
1:19:51
Flash model is enough. Yeah. Yeah. I'm a big believer in pushing
Latency Predictions & Tokens/sec Vision
1:19:58
on latency, because I think being able to have really low-latency interactions with a system you're using is just much
1:20:04
more delightful than something that is 10 times as slow or 20 times as slow. And I think, in the
1:20:10
future, we'll see models, and underlying software and hardware systems, that are 20x lower latency than
1:20:17
what we have today, 50x lower latency. And that's going to be really important for systems that need to do a
1:20:24
lot of stuff between your interactions. Yeah. There are two extremes, right? And then meanwhile you also have Deep
1:20:31
Think, which is all the way on the other side, right? But you would use Deep Think all the time if it weren't for cost and
1:20:37
latency, right? If you could have that capability in a model, because the
1:20:42
latency improvement was 20x in the underlying hardware and systems and costs, there's no reason you
1:20:49
wouldn't want that. Yeah. But at the same time, then you'd probably have a model that is even
1:20:56
better that would take you 20 times longer, even on that new hardware. Yeah. You know, the Pareto
1:21:03
curve keeps climbing. Yeah, onward and outward.
1:21:09
Yeah. Should we ask him for predictions to wrap up? I don't know if you have any predictions that you like to
1:21:14
keep, you know. One way to do this is, you have your tests that you run whenever
1:21:19
a new model comes out. What's something that you're not quite happy with yet that you think will get
1:21:26
done soon? Let me make two predictions that are not
Future Predictions: Personal Models & Hardware
1:21:33
quite in that vein. Yeah. So I think a personalized model that knows you and knows all your state,
1:21:39
and is able to retrieve over all the state you have access to and opt into, is
1:21:44
going to be incredibly useful compared to a more generic model that doesn't have access to that. So, can
1:21:51
something attend to everything I've ever seen, every email, every photo, every video I've watched? That's going to be really useful.
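A minimal sketch of what "retrieve over all the state you opt into" could look like mechanically (toy code with random stand-in embeddings, not a description of any Google product): embed each personal item once, then surface the nearest items for a query so the model only has to attend to that slice.

```python
# Toy retrieval over a tiny personal corpus using cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
items = ["email: trip itinerary", "photo: whiteboard sketch", "doc: tax notes"]
item_vecs = rng.normal(size=(len(items), 128))              # stand-in embeddings
item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)

def retrieve(query_vec, top_k=2):
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = item_vecs @ query_vec                           # cosine similarity
    best = np.argsort(scores)[::-1][:top_k]
    return [(items[i], float(scores[i])) for i in best]

print(retrieve(rng.normal(size=128)))
```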
1:21:58
I think more and more specialized hardware is going to
1:22:04
enable much lower-latency models and much more capable models at affordable
1:22:10
prices than, say, the current status quo. That's also going to be
1:22:15
quite important. Yeah. When you say much lower latency, people usually talk in tokens per second. Is that an okay way to frame it?
1:22:22
Sure. You know, we're at, let's say, 100 now. Yeah, and we can go to the thousands.
1:22:29
Is it meaningful to go to 10,000? Yes. Really? Okay. Absolutely. Right. Yeah. Because of chain-of-thought
1:22:35
reasoning. I mean, you could think with many more tokens. You could do many more parallel
1:22:41
rollouts. You could generate way more code and check that the code is correct with chain-of-thought
1:22:48
reasoning. So I think being able to do that at 10,000 tokens per second would be awesome. Yeah. At 10,000 tokens per second you
1:22:54
are no longer reading the code. Yeah, you'll just generate it. It may not
1:23:00
end up being 10,000 tokens of code; it may be a thousand tokens of code with
1:23:05
9,000 tokens of reasoning behind it. Yeah. Yeah. Which would actually probably be much better code to read.
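A back-of-envelope sketch of why that jump matters (assumed numbers taken from the conversation: roughly 1,000 tokens of code plus 9,000 tokens of reasoning per rollout): at 10,000 tokens per second a full rollout takes about a second, versus minutes at today's roughly 100 tokens per second, which is what makes many parallel rollouts practical.

```python
# Rough arithmetic only: wall-clock time for reasoning rollouts at a given
# decode speed, assuming ~10,000 tokens per rollout.
def rollout_seconds(tokens_per_rollout=10_000, tokens_per_second=10_000,
                    num_rollouts=8, parallel=True):
    per_rollout = tokens_per_rollout / tokens_per_second
    return per_rollout if parallel else per_rollout * num_rollouts

print(rollout_seconds())                        # ~1 s per rollout at 10,000 tok/s
print(rollout_seconds(tokens_per_second=100))   # ~100 s at today's ~100 tok/s
```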
Closing
1:23:11
Yeah. Yeah. Yeah. If I had more time, I would have written a shorter letter. Yeah. Yeah.
1:23:16
Awesome, Jeff. This was amazing. Thanks for making the time. Thank you. It's been fun.
1:23:21
Thanks for having me.