*AI Summary*
*Abstract:*
This presentation outlines the technical evolution and reverse-engineering methodology of the Asahi Linux project, which aims to port Linux to Apple Silicon (M-series) platforms. The speaker details how Apple’s intentional "One True Recovery" mechanism allows for unsigned code execution at Exception Level 2 (EL2), providing a legitimate pathway for third-party kernels without requiring hardware exploits. Central to the project's success is the development of `m1n1`, a multi-purpose tool that functions as a bootloader, a Python-integrated hardware probing shell, and a hypervisor. This hypervisor allows developers to run macOS (XNU) as a guest to trace Memory Mapped I/O (MMIO) accesses in real-time, effectively creating a "strace for hardware" to document proprietary registers.
The talk highlights recent milestones, including the transition from feature-heavy downstream forks to a sustainable "upstream-first" development model for core drivers like USB 3.0 and the system controller. Technical deep dives reveal specific hardware idiosyncrasies, such as the Apple USB controller’s requirement for a full hardware reset upon device disconnection. Finally, the session provides a status update on M3 support—demonstrating initial boot success and basic functionality—while addressing the reverse-engineering challenges posed by M4 and M5 architectures, which restrict certain virtualization instructions previously used for hardware tracing.
*Engineering Analysis: Asahi Linux Methodology and Apple Silicon Hardware Parity*
* *0:33 Project Credits and Collaboration:* The speaker acknowledges the multi-disciplinary effort required to build a conformant user-space stack for unknown GPUs and complex kernel drivers, specifically crediting project founder Hector "Marcan" Martin for initial tooling.
* *3:24 Intentional Boot Architecture:* Unlike iOS devices, Apple Silicon Macs provide a "One True Recovery" (1TR) mode that allows users to authorize custom boot objects. This grants full control at EL2 (highest CPU privilege) without management engine interference.
* *7:43 Rapid Prototyping via `m1n1`:* The `m1n1` tool provides a Python proxy over a UART-to-USB connection, allowing engineers to poke hardware registers in real-time. This bypasses the traditional "code-compile-reboot" cycle, enabling rapid hardware model verification.
* *10:56 Hypervisor-Based Tracing:* By running macOS (XNU) within a specialized VM managed by `m1n1`, developers can trap and log MMIO accesses. This allows the team to observe how Apple’s proprietary drivers interact with the hardware, facilitating the documentation of undocumented registers.
* *14:52 Technical Debt and Upstreaming:* The project has pivoted from maintaining a massive downstream patch set to upstreaming drivers (USB 3, audio, system controllers) into the mainline Linux kernel to ensure long-term sustainability across distributions.
* *20:03 USB Controller Complexity:* Apple utilizes a modified Synopsys DesignWare USB3 controller. While registers are similar to version 3.0, the implementation lacks an official data sheet, requiring cross-referencing with other SoC vendors (Intel/Rockchip).
* *24:26 Hardware Idiosyncrasy (Reset Logic):* Tracing revealed that the Apple USB controller requires a full hardware reset, port reset, and clock gating upon every device disconnect to allow subsequent re-initialization.
* *27:37 DisplayPort Achievement:* The project recently achieved functional DisplayPort output on Linux, involving complex coordination between the display controller and the USB-C PHY for signal serialization.
* *31:21 M3 Platform Progress:* Initial support for the M3 architecture is underway, with successful boots and basic storage functionality confirmed by new contributors.
* *32:19 M4/M5 Reverse Engineering Challenges:* Newer chips (M4/M5) introduce "Guarded Levels" that restrict certain virtualization instructions. This prevents `m1n1` from tracing XNU MMIO accesses, necessitating new, more difficult reverse-engineering strategies.
AI-generated summary created with gemini-3-flash-preview for free via RocketRecap-dot-com. (Input: 24,328 tokens, Output: 857 tokens, Est. cost: $0.0147).
Below, I will provide input for an example video (comprising the title, description, and transcript, in this order) and the corresponding abstract and summary I expect. Afterward, I will provide a new transcript that I want summarized in the same format.
**Please give an abstract of the transcript and then summarize the transcript in a self-contained bullet list format.** Include starting timestamps, important details and key takeaways.
Example Input:
Fluidigm Polaris Part 2- illuminator and camera
mikeselectricstuff
131K subscribers
5,857 views Aug 26, 2024
Fluidigm Polaris part 1 : • Fluidigm Polaris (Part 1) - Biotech g...
Ebay listings: https://www.ebay.co.uk/usr/mikeselect...
Merch https://mikeselectricstuff.creator-sp...
40 Comments
@robertwatsonbath
6 hours ago
Thanks Mike. Ooof! - with the level of bodgery going on around 15:48 I think shame would have made me do a board re spin, out of my own pocket if I had to.
1
Reply
@Muonium1
9 hours ago
The green LED looks different from the others and uses phosphor conversion because of the "green gap" problem where green InGaN emitters suffer efficiency droop at high currents. Phosphide based emitters don't start becoming efficient until around 600nm so also can't be used for high power green emitters. See the paper and plot by Matthias Auf der Maur in his 2015 paper on alloy fluctuations in InGaN as the cause of reduced external quantum efficiency at longer (green) wavelengths.
4
Reply
1 reply
@tafsirnahian669
10 hours ago (edited)
Can this be used as an astrophotography camera?
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
6 hours ago
Yes, but may need a shutter to avoid light during readout
Reply
@2010craggy
11 hours ago
Narrowband filters we use in Astronomy (Astrophotography) are sided- they work best passing light in one direction so I guess the arrows on the filter frames indicate which way round to install them in the filter wheel.
1
Reply
@vitukz
12 hours ago
A mate with Channel @extractions&ire could use it
2
Reply
@RobertGallop
19 hours ago
That LED module says it can go up to 28 amps!!! 21 amps for 100%. You should see what it does at 20 amps!
Reply
@Prophes0r
19 hours ago
I had an "Oh SHIT!" moment when I realized that the weird trapezoidal shape of that light guide was for keystone correction of the light source.
Very clever.
6
Reply
@OneBiOzZ
20 hours ago
given the cost of the CCD you think they could have run another PCB for it
9
Reply
@tekvax01
21 hours ago
$20 thousand dollars per minute of run time!
1
Reply
@tekvax01
22 hours ago
"We spared no expense!" John Hammond Jurassic Park.
*(that's why this thing costs the same as a 50-seat Greyhound Bus coach!)
Reply
@florianf4257
22 hours ago
The smearing on the image could be due to the fact that you don't use a shutter, so you see brighter stripes under bright areas of the image as you still iluminate these pixels while the sensor data ist shifted out towards the top. I experienced this effect back at university with a LN-Cooled CCD for Spectroscopy. The stripes disapeared as soon as you used the shutter instead of disabling it in the open position (but fokussing at 100ms integration time and continuous readout with a focal plane shutter isn't much fun).
12
Reply
mikeselectricstuff
·
1 reply
@mikeselectricstuff
12 hours ago
I didn't think of that, but makes sense
2
Reply
@douro20
22 hours ago (edited)
The red LED reminds me of one from Roithner Lasertechnik. I have a Symbol 2D scanner which uses two very bright LEDs from that company, one red and one red-orange. The red-orange is behind a lens which focuses it into an extremely narrow beam.
1
Reply
@RicoElectrico
23 hours ago
PFG is Pulse Flush Gate according to the datasheet.
Reply
@dcallan812
23 hours ago
Very interesting. 2x
Reply
@littleboot_
1 day ago
Cool interesting device
Reply
@dav1dbone
1 day ago
I've stripped large projectors, looks similar, wonder if some of those castings are a magnesium alloy?
Reply
@kevywevvy8833
1 day ago
ironic that some of those Phlatlight modules are used in some of the cheapest disco lights.
1
Reply
1 reply
@bill6255
1 day ago
Great vid - gets right into subject in title, its packed with information, wraps up quickly. Should get a YT award! imho
3
Reply
@JAKOB1977
1 day ago (edited)
The whole sensor module incl. a 5 grand 50mpix sensor for 49 £.. highest bid atm
Though also a limited CCD sensor, but for the right buyer its a steal at these relative low sums.
Architecture Full Frame CCD (Square Pixels)
Total Number of Pixels 8304 (H) × 6220 (V) = 51.6 Mp
Number of Effective Pixels 8208 (H) × 6164 (V) = 50.5 Mp
Number of Active Pixels 8176 (H) × 6132 (V) = 50.1 Mp
Pixel Size 6.0 m (H) × 6.0 m (V)
Active Image Size 49.1 mm (H) × 36.8 mm (V)
61.3 mm (Diagonal),
645 1.1x Optical Format
Aspect Ratio 4:3
Horizontal Outputs 4
Saturation Signal 40.3 ke−
Output Sensitivity 31 V/e−
Quantum Efficiency
KAF−50100−CAA
KAF−50100−AAA
KAF−50100−ABA (with Lens)
22%, 22%, 16% (Peak R, G, B)
25%
62%
Read Noise (f = 18 MHz) 12.5 e−
Dark Signal (T = 60°C) 42 pA/cm2
Dark Current Doubling Temperature 5.7°C
Dynamic Range (f = 18 MHz) 70.2 dB
Estimated Linear Dynamic Range
(f = 18 MHz)
69.3 dB
Charge Transfer Efficiency
Horizontal
Vertical
0.999995
0.999999
Blooming Protection
(4 ms Exposure Time)
800X Saturation Exposure
Maximum Date Rate 18 MHz
Package Ceramic PGA
Cover Glass MAR Coated, 2 Sides or
Clear Glass
Features
• TRUESENSE Transparent Gate Electrode
for High Sensitivity
• Ultra-High Resolution
• Board Dynamic Range
• Low Noise Architecture
• Large Active Imaging Area
Applications
• Digitization
• Mapping/Aerial
• Photography
• Scientific
Thx for the tear down Mike, always a joy
Reply
@martinalooksatthings
1 day ago
15:49 that is some great bodging on of caps, they really didn't want to respin that PCB huh
8
Reply
@RhythmGamer
1 day ago
Was depressed today and then a new mike video dropped and now I’m genuinely happy to get my tear down fix
1
Reply
@dine9093
1 day ago (edited)
Did you transfrom into Mr Blobby for a moment there?
2
Reply
@NickNorton
1 day ago
Thanks Mike. Your videos are always interesting.
5
Reply
@KeritechElectronics
1 day ago
Heavy optics indeed... Spare no expense, cost no object. Splendid build quality. The CCD is a thing of beauty!
1
Reply
@YSoreil
1 day ago
The pricing on that sensor is about right, I looked in to these many years ago when they were still in production since it's the only large sensor you could actually buy. Really cool to see one in the wild.
2
Reply
@snik2pl
1 day ago
That leds look like from led projector
Reply
@vincei4252
1 day ago
TDI = Time Domain Integration ?
1
Reply
@wolpumba4099
1 day ago (edited)
Maybe the camera should not be illuminated during readout.
From the datasheet of the sensor (Onsemi): saturation 40300 electrons, read noise 12.5 electrons per pixel @ 18MHz (quite bad). quantum efficiency 62% (if it has micro lenses), frame rate 1 Hz. lateral overflow drain to prevent blooming protects against 800x (factor increases linearly with exposure time) saturation exposure (32e6 electrons per pixel at 4ms exposure time), microlens has +/- 20 degree acceptance angle
i guess it would be good for astrophotography
4
Reply
@txm100
1 day ago (edited)
Babe wake up a new mikeselectricstuff has dropped!
9
Reply
@vincei4252
1 day ago
That looks like a finger-lakes filter wheel, however, for astronomy they'd never use such a large stepper.
1
Reply
@MRooodddvvv
1 day ago
yaaaaay ! more overcomplicated optical stuff !
4
Reply
1 reply
@NoPegs
1 day ago
He lives!
11
Reply
1 reply
Transcript
0:00
so I've stripped all the bits of the
0:01
optical system so basically we've got
0:03
the uh the camera
0:05
itself which is mounted on this uh very
0:09
complex
0:10
adjustment thing which obviously to set
0:13
you the various tilt and uh alignment
0:15
stuff then there's two of these massive
0:18
lenses I've taken one of these apart I
0:20
think there's something like about eight
0:22
or nine Optical elements in here these
0:25
don't seem to do a great deal in terms
0:26
of electr magnification they're obiously
0:28
just about getting the image to where it
0:29
uh where it needs to be just so that
0:33
goes like that then this Optical block I
0:36
originally thought this was made of some
0:37
s crazy heavy material but it's just
0:39
really the sum of all these Optical bits
0:41
are just ridiculously heavy those lenses
0:43
are about 4 kilos each and then there's
0:45
this very heavy very solid um piece that
0:47
goes in the middle and this is so this
0:49
is the filter wheel assembly with a
0:51
hilariously oversized steper
0:53
motor driving this wheel with these very
0:57
large narrow band filters so we've got
1:00
various different shades of uh
1:03
filters there five Al together that
1:06
one's actually just showing up a silver
1:07
that's actually a a red but fairly low
1:10
transmission orangey red blue green
1:15
there's an excess cover on this side so
1:16
the filters can be accessed and changed
1:19
without taking anything else apart even
1:21
this is like ridiculous it's like solid
1:23
aluminium this is just basically a cover
1:25
the actual wavelengths of these are um
1:27
488 525 570 630 and 700 NM not sure what
1:32
the suffix on that perhaps that's the uh
1:34
the width of the spectral line say these
1:37
are very narrow band filters most of
1:39
them are you very little light through
1:41
so it's still very tight narrow band to
1:43
match the um fluoresence of the dies
1:45
they're using in the biochemical process
1:48
and obviously to reject the light that's
1:49
being fired at it from that Illuminator
1:51
box and then there's a there's a second
1:53
one of these lenses then the actual sort
1:55
of samples below that so uh very serious
1:58
amount of very uh chunky heavy Optics
2:01
okay let's take a look at this light
2:02
source made by company Lumen Dynamics
2:04
who are now part of
2:06
excelitas self-contained unit power
2:08
connector USB and this which one of the
2:11
Cable Bundle said was a TTL interface
2:14
USB wasn't used in uh the fluid
2:17
application output here and I think this
2:19
is an input for um light feedback I
2:21
don't if it's regulated or just a measur
2:23
measurement facility and the uh fiber
2:27
assembly
2:29
Square Inlet there and then there's two
2:32
outputs which have uh lens assemblies
2:35
and this small one which goes back into
2:37
that small Port just Loops out of here
2:40
straight back in So on this side we've
2:42
got the electronics which look pretty
2:44
straightforward we've got a bit of power
2:45
supply stuff over here and we've got
2:48
separate drivers for each wavelength now
2:50
interesting this is clearly been very
2:52
specifically made for this application
2:54
you I was half expecting like say some
2:56
generic drivers that could be used for a
2:58
number of different things but actually
3:00
literally specified the exact wavelength
3:02
on the PCB there is provision here for
3:04
385 NM which isn't populated but this is
3:07
clearly been designed very specifically
3:09
so these four drivers look the same but
3:10
then there's two higher power ones for
3:12
575 and
3:14
520 a slightly bigger heat sink on this
3:16
575 section there a p 24 which is
3:20
providing USB interface USB isolator the
3:23
USB interface just presents as a comport
3:26
I did have a quick look but I didn't
3:27
actually get anything sensible um I did
3:29
dump the Pi code out and there's a few
3:31
you a few sort of commands that you
3:32
could see in text but I didn't actually
3:34
manage to get it working properly I
3:36
found some software for related version
3:38
but it didn't seem to want to talk to it
3:39
but um I say that wasn't used for the
3:41
original application it might be quite
3:42
interesting to get try and get the Run
3:44
hours count out of it and the TTL
3:46
interface looks fairly straightforward
3:48
we've got positions for six opto
3:50
isolators but only five five are
3:52
installed so that corresponds with the
3:54
unused thing so I think this hopefully
3:56
should be as simple as just providing a
3:57
ttrl signal for each color to uh enable
4:00
it a big heat sink here which is there I
4:03
think there's like a big S of metal
4:04
plate through the middle of this that
4:05
all the leads are mounted on the other
4:07
side so this is heat sinking it with a
4:09
air flow from a uh just a fan in here
4:13
obviously don't have the air flow
4:14
anywhere near the Optics so conduction
4:17
cool through to this plate that's then
4:18
uh air cooled got some pots which are
4:21
presumably power
4:22
adjustments okay let's take a look at
4:24
the other side which is uh much more
4:27
interesting see we've got some uh very
4:31
uh neatly Twisted cable assemblies there
4:35
a bunch of leads so we've got one here
4:37
475 up here 430 NM 630 575 and 520
4:44
filters and dcro mirrors a quick way to
4:48
see what's white is if we just shine
4:49
some white light through
4:51
here not sure how it is is to see on the
4:54
camera but shining white light we do
4:55
actually get a bit of red a bit of blue
4:57
some yellow here so the obstacle path
5:00
575 it goes sort of here bounces off
5:03
this mirror and goes out the 520 goes
5:07
sort of down here across here and up
5:09
there 630 goes basically straight
5:13
through
5:15
430 goes across there down there along
5:17
there and the 475 goes down here and
5:20
left this is the light sensing thing
5:22
think here there's just a um I think
5:24
there a photo diode or other sensor
5:26
haven't actually taken that off and
5:28
everything's fixed down to this chunk of
5:31
aluminium which acts as the heat
5:32
spreader that then conducts the heat to
5:33
the back side for the heat
5:35
sink and the actual lead packages all
5:38
look fairly similar except for this one
5:41
on the 575 which looks quite a bit more
5:44
substantial big spay
5:46
Terminals and the interface for this
5:48
turned out to be extremely simple it's
5:50
literally a 5V TTL level to enable each
5:54
color doesn't seem to be any tensity
5:56
control but there are some additional
5:58
pins on that connector that weren't used
5:59
in the through time thing so maybe
6:01
there's some extra lines that control
6:02
that I couldn't find any data on this uh
6:05
unit and the um their current product
6:07
range is quite significantly different
6:09
so we've got the uh blue these
6:13
might may well be saturating the camera
6:16
so they might look a bit weird so that's
6:17
the 430
6:18
blue the 575
6:24
yellow uh
6:26
475 light blue
6:29
the uh 520
6:31
green and the uh 630 red now one
6:36
interesting thing I noticed for the
6:39
575 it's actually it's actually using a
6:42
white lead and then filtering it rather
6:44
than using all the other ones are using
6:46
leads which are the fundamental colors
6:47
but uh this is actually doing white and
6:50
it's a combination of this filter and
6:52
the dichroic mirrors that are turning to
6:55
Yellow if we take the filter out and a
6:57
lot of the a lot of the um blue content
7:00
is going this way the red is going
7:02
straight through these two mirrors so
7:05
this is clearly not reflecting much of
7:08
that so we end up with the yellow coming
7:10
out of uh out of there which is a fairly
7:14
light yellow color which you don't
7:16
really see from high intensity leads so
7:19
that's clearly why they've used the
7:20
white to uh do this power consumption of
7:23
the white is pretty high so going up to
7:25
about 2 and 1 half amps on that color
7:27
whereas most of the other colors are
7:28
only drawing half an amp or so at 24
7:30
volts the uh the green is up to about
7:32
1.2 but say this thing is uh much
7:35
brighter and if you actually run all the
7:38
colors at the same time you get a fairly
7:41
reasonable um looking white coming out
7:43
of it and one thing you might just be
7:45
out to notice is there is some sort
7:46
color banding around here that's not
7:49
getting uh everything s completely
7:51
concentric and I think that's where this
7:53
fiber optic thing comes
7:58
in I'll
8:00
get a couple of Fairly accurately shaped
8:04
very sort of uniform color and looking
8:06
at What's um inside here we've basically
8:09
just got this Square Rod so this is
8:12
clearly yeah the lights just bouncing
8:13
off all the all the various sides to um
8:16
get a nice uniform illumination uh this
8:19
back bit looks like it's all potted so
8:21
nothing I really do to get in there I
8:24
think this is fiber so I have come
8:26
across um cables like this which are
8:27
liquid fill but just looking through the
8:30
end of this it's probably a bit hard to
8:31
see it does look like there fiber ends
8:34
going going on there and so there's this
8:36
feedback thing which is just obviously
8:39
compensating for the any light losses
8:41
through here to get an accurate
8:43
representation of uh the light that's
8:45
been launched out of these two
8:47
fibers and you see uh
8:49
these have got this sort of trapezium
8:54
shape light guides again it's like a
8:56
sort of acrylic or glass light guide
9:00
guess projected just to make the right
9:03
rectangular
9:04
shape and look at this Center assembly
9:07
um the light output doesn't uh change
9:10
whether you feed this in or not so it's
9:11
clear not doing any internal Clos Loop
9:14
control obviously there may well be some
9:16
facility for it to do that but it's not
9:17
being used in this
9:19
application and so this output just
9:21
produces a voltage on the uh outle
9:24
connector proportional to the amount of
9:26
light that's present so there's a little
9:28
diffuser in the back there
9:30
and then there's just some kind of uh
9:33
Optical sensor looks like a
9:35
chip looking at the lead it's a very
9:37
small package on the PCB with this lens
9:40
assembly over the top and these look
9:43
like they're actually on a copper
9:44
Metalized PCB for maximum thermal
9:47
performance and yeah it's a very small
9:49
package looks like it's a ceramic
9:51
package and there's a thermister there
9:53
for temperature monitoring this is the
9:56
475 blue one this is the 520 need to
9:59
Green which is uh rather different OB
10:02
it's a much bigger D with lots of bond
10:04
wise but also this looks like it's using
10:05
a phosphor if I shine a blue light at it
10:08
lights up green so this is actually a
10:10
phosphor conversion green lead which
10:12
I've I've come across before they want
10:15
that specific wavelength so they may be
10:17
easier to tune a phosphor than tune the
10:20
um semiconductor material to get the uh
10:23
right right wavelength from the lead
10:24
directly uh red 630 similar size to the
10:28
blue one or does seem to have a uh a
10:31
lens on top of it there is a sort of red
10:33
coloring to
10:35
the die but that doesn't appear to be
10:38
fluorescent as far as I can
10:39
tell and the white one again a little
10:41
bit different sort of much higher
10:43
current
10:46
connectors a makeer name on that
10:48
connector flot light not sure if that's
10:52
the connector or the lead
10:54
itself and obviously with the phosphor
10:56
and I'd imagine that phosphor may well
10:58
be tuned to get the maximum to the uh 5
11:01
cenm and actually this white one looks
11:04
like a St fairly standard product I just
11:06
found it in Mouse made by luminous
11:09
devices in fact actually I think all
11:11
these are based on various luminous
11:13
devices modules and they're you take
11:17
looks like they taking the nearest
11:18
wavelength and then just using these
11:19
filters to clean it up to get a precise
11:22
uh spectral line out of it so quite a
11:25
nice neat and um extreme
11:30
bright light source uh sure I've got any
11:33
particular use for it so I think this
11:35
might end up on
11:36
eBay but uh very pretty to look out and
11:40
without the uh risk of burning your eyes
11:43
out like you do with lasers so I thought
11:45
it would be interesting to try and
11:46
figure out the runtime of this things
11:48
like this we usually keep some sort
11:49
record of runtime cuz leads degrade over
11:51
time I couldn't get any software to work
11:52
through the USB face but then had a
11:54
thought probably going to be writing the
11:55
runtime periodically to the e s prom so
11:58
I just just scope up that and noticed it
12:00
was doing right every 5 minutes so I
12:02
just ran it for a while periodically
12:04
reading the E squ I just held the pick
12:05
in in reset and um put clip over to read
12:07
the square prom and found it was writing
12:10
one location per color every 5 minutes
12:12
so if one color was on it would write
12:14
that location every 5 minutes and just
12:16
increment it by one so after doing a few
12:18
tests with different colors of different
12:19
time periods it looked extremely
12:21
straightforward it's like a four bite
12:22
count for each color looking at the
12:24
original data that was in it all the
12:26
colors apart from Green were reading
12:28
zero and the green was reading four
12:30
indicating a total 20 minutes run time
12:32
ever if it was turned on run for a short
12:34
time then turned off that might not have
12:36
been counted but even so indicates this
12:37
thing wasn't used a great deal the whole
12:40
s process of doing a run can be several
12:42
hours but it'll only be doing probably
12:43
the Imaging at the end of that so you
12:46
wouldn't expect to be running for a long
12:47
time but say a single color for 20
12:50
minutes over its whole lifetime does
12:52
seem a little bit on the low side okay
12:55
let's look at the camera un fortunately
12:57
I managed to not record any sound when I
12:58
did this it's also a couple of months
13:00
ago so there's going to be a few details
13:02
that I've forgotten so I'm just going to
13:04
dub this over the original footage so um
13:07
take the lid off see this massive great
13:10
heat sink so this is a pel cool camera
13:12
we've got this blower fan producing a
13:14
fair amount of air flow through
13:16
it the connector here there's the ccds
13:19
mounted on the board on the
13:24
right this unplugs so we've got a bit of
13:27
power supply stuff on here
13:29
USB interface I think that's the Cyprus
13:32
microcontroller High speeded USB
13:34
interface there's a zyink spon fpga some
13:40
RAM and there's a couple of ATD
13:42
converters can't quite read what those
13:45
those are but anal
13:47
devices um little bit of bodgery around
13:51
here extra decoupling obviously they
13:53
have having some noise issues this is
13:55
around the ram chip quite a lot of extra
13:57
capacitors been added there
13:59
uh there's a couple of amplifiers prior
14:01
to the HD converter buffers or Andor
14:05
amplifiers taking the CCD
14:08
signal um bit more power spy stuff here
14:11
this is probably all to do with
14:12
generating the various CCD bias voltages
14:14
they uh need quite a lot of exotic
14:18
voltages next board down is just a
14:20
shield and an interconnect
14:24
boardly shielding the power supply stuff
14:26
from some the more sensitive an log
14:28
stuff
14:31
and this is the bottom board which is
14:32
just all power supply
14:34
stuff as you can see tons of capacitors
14:37
or Transformer in
14:42
there and this is the CCD which is a uh
14:47
very impressive thing this is a kf50 100
14:50
originally by true sense then codec
14:53
there ON
14:54
Semiconductor it's 50 megapixels uh the
14:58
only price I could find was this one
15:00
5,000 bucks and the architecture you can
15:03
see there actually two separate halves
15:04
which explains the Dual AZ converters
15:06
and two amplifiers it's literally split
15:08
down the middle and duplicated so it's
15:10
outputting two streams in parallel just
15:13
to keep the bandwidth sensible and it's
15:15
got this amazing um diffraction effects
15:18
it's got micro lenses over the pixel so
15:20
there's there's a bit more Optics going
15:22
on than on a normal
15:25
sensor few more bodges on the CCD board
15:28
including this wire which isn't really
15:29
tacked down very well which is a bit uh
15:32
bit of a mess quite a few bits around
15:34
this board where they've uh tacked
15:36
various bits on which is not super
15:38
impressive looks like CCD drivers on the
15:40
left with those 3 ohm um damping
15:43
resistors on the
15:47
output get a few more little bodges
15:50
around here some of
15:52
the and there's this separator the
15:54
silica gel to keep the moisture down but
15:56
there's this separator that actually
15:58
appears to be cut from piece of
15:59
antistatic
16:04
bag and this sort of thermal block on
16:06
top of this stack of three pel Cola
16:12
modules so as with any Stacks they get
16:16
um larger as they go back towards the
16:18
heat sink because each P's got to not
16:20
only take the heat from the previous but
16:21
also the waste heat which is quite
16:27
significant you see a little temperature
16:29
sensor here that copper block which
16:32
makes contact with the back of the
16:37
CCD and this's the back of the
16:40
pelas this then contacts the heat sink
16:44
on the uh rear there a few thermal pads
16:46
as well for some of the other power
16:47
components on this
16:51
PCB okay I've connected this uh camera
16:54
up I found some drivers on the disc that
16:56
seem to work under Windows 7 couldn't
16:58
get to install under Windows 11 though
17:01
um in the absence of any sort of lens or
17:03
being bothered to the proper amount I've
17:04
just put some f over it and put a little
17:06
pin in there to make a pinhole lens and
17:08
software gives a few options I'm not
17:11
entirely sure what all these are there's
17:12
obviously a clock frequency 22 MHz low
17:15
gain and with PFG no idea what that is
17:19
something something game programmable
17:20
Something game perhaps ver exposure
17:23
types I think focus is just like a
17:25
continuous grab until you tell it to
17:27
stop not entirely sure all these options
17:30
are obviously exposure time uh triggers
17:33
there ex external hardware trigger inut
17:35
you just trigger using a um thing on
17:37
screen so the resolution is 8176 by
17:40
6132 and you can actually bin those
17:42
where you combine multiple pixels to get
17:46
increased gain at the expense of lower
17:48
resolution down this is a 10sec exposure
17:51
obviously of the pin hole it's very uh
17:53
intensitive so we just stand still now
17:56
downloading it there's the uh exposure
17:59
so when it's
18:01
um there's a little status thing down
18:03
here so that tells you the um exposure
18:07
[Applause]
18:09
time it's this is just it
18:15
downloading um it is quite I'm seeing
18:18
quite a lot like smearing I think that I
18:20
don't know whether that's just due to
18:21
pixels overloading or something else I
18:24
mean yeah it's not it's not um out of
18:26
the question that there's something not
18:27
totally right about this camera
18:28
certainly was bodge wise on there um I
18:31
don't I'd imagine a camera like this
18:32
it's got a fairly narrow range of
18:34
intensities that it's happy with I'm not
18:36
going to spend a great deal of time on
18:38
this if you're interested in this camera
18:40
maybe for astronomy or something and
18:42
happy to sort of take the risk of it may
18:44
not be uh perfect I'll um I think I'll
18:47
stick this on eBay along with the
18:48
Illuminator I'll put a link down in the
18:50
description to the listing take your
18:52
chances to grab a bargain so for example
18:54
here we see this vertical streaking so
18:56
I'm not sure how normal that is this is
18:58
on fairly bright scene looking out the
19:02
window if I cut the exposure time down
19:04
on that it's now 1 second
19:07
exposure again most of the image
19:09
disappears again this is looks like it's
19:11
possibly over still overloading here go
19:14
that go down to say say quarter a
19:16
second so again I think there might be
19:19
some Auto gain control going on here um
19:21
this is with the PFG option let's try
19:23
turning that off and see what
19:25
happens so I'm not sure this is actually
19:27
more streaking or which just it's
19:29
cranked up the gain all the dis display
19:31
gray scale to show what um you know the
19:33
range of things that it's captured
19:36
there's one of one of 12 things in the
19:38
software there's um you can see of you
19:40
can't seem to read out the temperature
19:42
of the pelta cooler but you can set the
19:44
temperature and if you said it's a
19:46
different temperature you see the power
19:48
consumption jump up running the cooler
19:50
to get the temperature you requested but
19:52
I can't see anything anywhere that tells
19:54
you whether the cool is at the at the
19:56
temperature other than the power
19:57
consumption going down and there's no
19:59
temperature read out
20:03
here and just some yeah this is just
20:05
sort of very basic software I'm sure
20:07
there's like an API for more
20:09
sophisticated
20:10
applications but so if you know anything
20:12
more about these cameras please um stick
20:14
in the
20:15
comments um incidentally when I was
20:18
editing I didn't notice there was a bent
20:19
pin on the um CCD but I did fix that
20:22
before doing these tests and also
20:24
reactivated the um silica gel desicant
20:26
cuz I noticed it was uh I was getting
20:28
bit of condensation on the window but um
20:31
yeah so a couple of uh interesting but
20:34
maybe not particularly uh useful pieces
20:37
of Kit except for someone that's got a
20:38
very specific use so um I'll stick a
20:42
I'll stick these on eBay put a link in
20:44
the description and say hopefully
20:45
someone could actually make some uh good
20:47
use of these things
Example Output:
**Abstract:**
This video presents Part 2 of a teardown focusing on the optical components of a Fluidigm Polaris biotechnology instrument, specifically the multi-wavelength illuminator and the high-resolution CCD camera.
The Lumen Dynamics illuminator unit is examined in detail, revealing its construction using multiple high-power LEDs (430nm, 475nm, 520nm, 575nm, 630nm) combined via dichroic mirrors and filters. A square fiber optic rod is used to homogenize the light. A notable finding is the use of a phosphor-converted white LED filtered to achieve the 575nm output. The unit features simple TTL activation for each color, conduction cooling, and internal homogenization optics. Analysis of its EEPROM suggests extremely low operational runtime.
The camera module teardown showcases a 50 Megapixel ON Semiconductor KAF-50100 CCD sensor with micro-lenses, cooled by a multi-stage Peltier stack. The control electronics include an FPGA and a USB interface. Significant post-manufacturing modifications ("bodges") are observed on the camera's circuit boards. Basic functional testing using vendor software and a pinhole lens confirms image capture but reveals prominent vertical streaking artifacts, the cause of which remains uncertain (potential overload, readout artifact, or fault).
**Exploring the Fluidigm Polaris: A Detailed Look at its High-End Optics and Camera System**
* **0:00 High-End Optics:** The system utilizes heavy, high-quality lenses and mirrors for precise imaging, weighing around 4 kilos each.
* **0:49 Narrow Band Filters:** A filter wheel with five narrow band filters (488, 525, 570, 630, and 700 nm) ensures accurate fluorescence detection and rejection of excitation light.
* **2:01 Customizable Illumination:** The Lumen Dynamics light source offers five individually controllable LED wavelengths (430, 475, 520, 575, 630 nm) with varying power outputs. The 575nm yellow LED is uniquely achieved using a white LED with filtering.
* **3:45 TTL Control:** The light source is controlled via a simple TTL interface, enabling easy on/off switching for each LED color.
* **12:55 Sophisticated Camera:** The system includes a 50-megapixel ON Semiconductor KAF-50100 CCD camera with a Peltier cooling system for reduced noise.
* **14:54 High-Speed Data Transfer:** The camera features dual analog-to-digital converters to manage the high data throughput of the 50-megapixel sensor, which is effectively two 25-megapixel sensors operating in parallel.
* **18:11 Possible Issues:** The video creator noted some potential issues with the camera, including image smearing.
* **18:11 Limited Dynamic Range:** The camera's sensor has a limited dynamic range, making it potentially challenging to capture scenes with a wide range of brightness levels.
* **11:45 Low Runtime:** Internal data suggests the system has seen minimal usage, with only 20 minutes of recorded runtime for the green LED.
* **20:38 Availability on eBay:** Both the illuminator and camera are expected to be listed for sale on eBay.
Here is the real transcript. What would be a good group of people to review this topic? Please provide a summary like they would:
00:07 All right. Thanks, everyone, for being here.
00:20 I'm very excited to be here and to be giving this presentation,
00:23 especially this early in the morning.
00:25 And before we get started,
00:28 I want to -- oops, that was too fast.
00:33 I want to start with the credits, because it's not just me.
00:35 This is an entire project with many people contributing to this,
00:37 and this is also the only way this is possible,
00:39 because we need to do a lot of work and write a lot of drivers.
00:43 I, unfortunately, don't have time to call out
00:44 all of these amazing people who help us support and all of this,
00:48 but I want to specifically mention Marcan,
00:50 who originally started this project years ago,
00:53 built much of the tooling and many drivers,
00:55 and supported a lot of the other contributors.
00:56 And without him, none of this would have been possible.
00:59 So I'm very happy that he started this,
01:02 and he's probably one of the biggest contributors to this project.
01:06 We also have Alyssa, who led the graphics driver work,
01:08 and built a fully conformant user-space stack
01:12 for this completely unknown GPU.
01:14 We have Asahi Lina, who wrote the GPU kernel driver.
01:16 There's Janne, who's doing downstream kernel maintenance.
01:19 There's people working on audio and all kinds of other stuff.
01:23 So without all of these amazing people, this would not be possible,
01:25 and I'm very glad that I'm working with these.
01:28 And probably I forgot a few of them here,
01:30 but thanks to everyone who made this possible.
01:33 And also thanks to all these people who contribute donations,
01:36 which allow us to buy hardware and fund development.
01:40 And finally, thanks to the people who actually helped me build the slides in the past days.
01:43 Now that being out of the way, let's get started with the talk.
01:47 I will roughly have three sections.
01:50 At first, I'll talk about the past a little bit,
01:53 how the project was started, and how all this tooling we have is built,
01:57 and how we actually reverse engineer hardware.
01:59 Then in the next section, I'm going to show how I use this tooling
02:04 to reverse engineer the Type-C parts and build USB support and so on.
02:08 And then at the end, we have a brief outlook about the M3, M4, M5 chips,
02:11 which are currently not supported in our project.
02:14 Yeah, just to give you a state and what our plans are there.
02:18 So, let's get started with the first one.
02:20 Apple, as many of you may probably know,
02:23 announced that they are going to move their Macs
02:26 to their own silicon back in 2020.
02:29 And Marcan, a few months later, announced that he would love to port Linux to these
02:33 and started a Patreon project so that he could work full-time on this,
02:38 and reached enough funding pretty quickly,
02:40 and then announced that he would do Asahi Linux in December 2020.
02:45 Mainly due to his work, in a few months later,
02:48 he already managed to get the first patches into the upstream kernel tree,
02:51 which allowed the thing to boot to a very, very simple shell.
02:55 But this was already the first amazing step,
02:56 because it took a lot of customizations and patching a few things around,
03:00 because these things are different quite a few ways from regular machines.
03:04 And then, about a year later, in early '22,
03:07 we released the first alpha of Asahi Linux for end-users,
03:11 so that they could install them in their machines and actually, yeah,
03:14 put a proper operating system onto this amazing hardware.
03:17 The one question that we always get is how is this even possible?
03:24 Because if you're familiar with most of Apple's devices,
03:27 these are fairly locked down.
03:28 So, you can't just boot, put Android onto your iPhones or to your iPads,
03:32 because Apple controls the entire boot chain,
03:34 they want all code to be signed,
03:35 and they just don't allow you to put any custom code on there.
03:39 And people assume that this must be the same for Macs,
03:42 but that's actually not true.
03:44 On these Macs, they just allow you to boot any code you want.
03:47 So, Mach-O is the binary format that Apple uses on their architecture,
03:51 and they put significant effort into engineering a system
03:55 that lets you run your own custom code on these,
03:57 without compromising the security of Apple's code.
04:00 So, what you have to do, you have to boot into what Apple calls
04:03 one true recovery, which has two conditions.
04:05 The first one, you have to be physically in front of the machine.
04:07 So, you have to long press the power button,
04:09 do a full hard reboot,
04:11 and there's some early stage code that that way makes sure
04:15 that you're sitting in front of the machine,
04:16 that there is not some malicious code running on your machine,
04:19 but that you actually intentionally move to this recovery thing.
04:22 And then you have to authenticate with a machine owner password.
04:24 A machine owner is usually the very first account created originally,
04:27 and only then you can just run a command.
04:30 It's going to be a bit scary because it's going to say,
04:32 you know, this is going to drop security, blah, blah, blah, and so on.
04:36 But then you can just run your own completely unsigned code,
04:38 and this is intentional.
04:39 And with that code, you get dropped into EL2,
04:42 so that's exception level two for those of you
04:44 who are not familiar with ARM architecture.
04:46 And this is the highest privilege level on the CPU.
04:49 And there's no management engine on top.
04:50 There's no firmware running on top on the CPU.
04:53 There's no more Apple code running on the CPU.
04:55 It is running on coprocessors, but we don't care about that.
04:58 But on the main core, you get full control.
05:00 And like I said, this took some significant engineering effort from them.
05:03 This is not just an accident.
05:04 So they really made this possible.
05:05 And they just -- whoops.
05:07 They made this possible.
05:10 They just want to stress again, there's no exploits or similar required.
05:12 This is intentionally done by Apple.
05:14 So we are not even hacking anything here.
05:16 And not even voiding the warranty.
05:18 And that is -- that is the guy who built this from Apple.
05:21 He's no longer at Apple now.
05:22 And he tweeted a while ago that he specifically designed this mechanism
05:27 to allow any code the user wants to boot.
05:39 And the way this works is you just create a secondary partition.
05:42 So in the beginning, the CPU will start from some secure ROM, which can never be changed.
05:48 It's going to load a boot loader called iBoot.
05:50 This is all Apple code from a NOR flash.
05:52 And that boot loader -- you then have a boot menu where you can select what you want to boot.
05:55 And if you want to go to Mac OS on the left side, it's going to run with fully security intact.
06:00 There's no way to compromise this.
06:02 But you can also just go, oh, no, I want to boot this other weird thing here.
06:05 And you get a copy of a secondary partition and can run your custom boot object there.
06:10 And they have a nice boot picker, and you can put your own picture there.
06:14 So what we did is we just put our own pictures that when you click on there,
06:16 it's going to boot to Linux.
06:17 The one issue is, of course, that there's no documentation.
06:23 So we just get dropped to this CPU.
06:25 It's a single core running, and we have no idea how any of the hardware works.
06:28 And to get started to port Linux, the first things you need is an interrupt controller and a timer.
06:34 Without these, Linux just cannot work.
06:36 But once you have these two, you can essentially already drop to a very, very simple Linux kernel.
06:40 At the time, it's just a normal ARM one, so nothing needed to change there.
06:44 But the interrupt controller is already the first custom device that Apple built.
06:48 And this is called the Apple Interrupt Controller, AIC for short.
06:51 There's no documentation.
06:52 Apple will never provide documentation, of course.
06:54 There's also no driver available.
06:56 But we know where this lives in memory.
06:58 And we also know what the registers of this controller are called, from the open-source XNU drops.
07:04 And so what Marcan did back then, because he knows how interrupt controllers work,
07:08 he just tried to, or his approach was to figure out, if I just poke these registers,
07:12 can I figure out how this hardware actually works, and then build a Linux driver?
07:17 But one problem is that if you just start by this and running your own code,
07:22 you always have to go back to one-true recovery, do a full boot cycle,
07:24 and usually the first iterations just don't work all that well,
07:28 so you're going to spend a lot of time rebooting, installing another boot object, and so on.
07:32 This is very annoying.
07:33 So, instead, he focused on building reverse engineering tools that makes this life much,
07:37 much easier.
07:39 And we all met back then through console hacking, and back on the Wii was the first time we used this.
07:43 We built something called m1n1, which is a small C application, stand-alone,
07:47 a freestanding C application that provides a small proxy over a UART.
07:52 And on Apple Silicon, you can find this UART over some of the USB pins, so you have a hardware UART.
07:57 And then this way you get a Python shell on a separate machine, which allows you to poke and play with hardware registers.
08:02 So, if you want to try out anything, you don't have to put a new -- write new C code, reboot again, and whatever.
08:07 You just have a second machine, have a Python shell there, and you just play with the hardware.
08:12 And it's going to send this over to the other one, execute it, and return back the results.
08:16 And this makes reverse engineering so much easier,
08:19 because you can, for example, write something that reboots your machine.
08:22 So, if you mess the hardware up in some way, and you need a full reset, yeah, just run Python reboot on your main machine.
08:29 It's going to reboot your target.
08:30 You can also use a shell.py, which drops to a small Python shell,
08:35 and then you can send, read, or write commands over the UART to the Apple Silicon device,
08:40 and figure out how the hardware works.
08:42 You can also allocate memory there if you need a memory buffer and put stuff there.
08:45 You can monitor MMIO ranges.
08:47 A lot of work went into just making the reverse engineering as awesome and as quick as possible.
08:54 And you can prototype drivers and have a very, very quick development cycle.
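To make that workflow concrete, here is a minimal sketch of the kind of interactive poking this enables. The proxy object, its method names, and the register address are assumptions standing in for the real tooling, not m1n1's documented API:

```python
# Toy stand-in for the remote register proxy, purely to illustrate the workflow.
# A real setup forwards these calls over the UART/USB link to the device under
# test; the class and method names here are assumptions, not m1n1's actual API.

class FakeProxy:
    def __init__(self):
        self.regs = {}                    # pretend MMIO space

    def read32(self, addr):
        return self.regs.get(addr, 0)

    def write32(self, addr, value):
        self.regs[addr] = value & 0xFFFFFFFF

p = FakeProxy()

SOME_REG = 0x2_3B10_0004                  # made-up register address
p.write32(SOME_REG, 0x1)                  # poke a bit...
print(hex(p.read32(SOME_REG)))            # ...and immediately read it back
```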
08:57 And this is really important if you're reverse engineering unknown hardware,
09:00 because you will need a lot of trial and error to figure out how this works.
09:04 And you don't want to do this inside the Linux kernel, which is also a very complex piece of software.
09:08 You want to do this as easy as possible, as quickly as possible,
09:11 so that you first know how the hardware works and how to drive it,
09:14 and only then go ahead and write the Linux driver.
09:18 And just an example of how this works, so this is --
09:22 so back then I wrote an I2C driver, and I wrote the first prototype inside Python.
09:27 And what you can really do is you set some registers at the beginning,
09:31 then write some registers, and then poll some other register until the transaction is done,
09:36 and then read the results back out of there.
09:38 And this is all -- before I even touched a single line of Linux kernel code,
09:41 it's really writing Python on a host machine, and seeing how this reacts and getting this to work there.
09:46 And once it works, then you at least have a hardware model in your head,
09:48 and then you can write the driver, and at that point you know that all the bugs are probably in your driver code,
09:53 and not in your understanding of the hardware.
09:55 And this makes for a really amazing development experience.
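As a rough illustration of that I2C flow (set up the controller, queue the transfer, poll for completion, read the result), a prototype against the same kind of proxy object might look like the sketch below. All register offsets, bits, and names are invented for illustration; the real controller's layout had to be discovered by experiment:

```python
import time

# Hypothetical I2C read prototype driven through a register-poking proxy.
# Offsets and bit meanings are illustrative assumptions, not the real controller.
I2C_BASE = 0x235010000
CTRL, FIFO, STATUS, RXDATA = 0x00, 0x04, 0x08, 0x0C
STATUS_DONE = 1 << 0

def i2c_read_byte(p, dev_addr, reg):
    p.write32(I2C_BASE + CTRL, 0x1)                  # enable/reset the block
    p.write32(I2C_BASE + FIFO, (dev_addr << 1) | 0)  # queue device address (write phase)
    p.write32(I2C_BASE + FIFO, reg)                  # queue the register to read
    p.write32(I2C_BASE + CTRL, 0x3)                  # kick off the transaction

    while not (p.read32(I2C_BASE + STATUS) & STATUS_DONE):
        time.sleep(0.001)                            # poll until the hardware reports done

    return p.read32(I2C_BASE + RXDATA) & 0xFF        # read the result back out
```

Once a loop like this behaves as expected, the same sequence can be carried over into a proper Linux driver, with reasonable confidence that any remaining bugs are in the driver code rather than in the hardware model.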
10:00 And this blind poking of registers works quite well for simple hardware, like the I2C bus,
10:06 because there's only so many ways you can build an I2C controller,
10:09 so you can fairly easily just guess how it works if you find register names or something.
10:14 But these machines also have much, much more complicated hardware,
10:18 like the GPU, for example, or some of the co-processors,
10:21 where you have no chance in hell by just guessing to figure out how this works.
10:26 Now, what you could do is you could load XNU into a disassembler,
10:30 and stare at the disassembly, but I've done that enough in my life.
10:34 I don't want to do that anymore.
10:35 It's also fairly annoying code, because it's written in C++ and I/O kit,
10:39 and usually the drivers are spread across multiple kernel extensions,
10:42 so it's just not fun to reverse engineer all of this,
10:45 and we'd much rather observe how this drives the hardware.
10:48 And what we can do, or what Marcan then built,
10:52 was a way to observe how XNU drives the hardware.
10:56 And so MMIO, or memory-mapped I/O, that's how we talk to the hardware.
11:00 EL2 is the highest exception level running on ARM machines,
11:04 and usually XNU runs in there, and below that you have user space,
11:07 which we don't care about much at this point,
11:09 and what we want to figure out is how does XNU talk to this MMIO interface?
11:13 And what he did was to push XNU one level below,
11:17 run it inside a VM essentially, a very custom-built VM,
11:20 and make m1n1 act as a hypervisor.
11:25 And originally we just map all the hardware through,
11:28 but then we can say, okay, this one piece of hardware we are interested in,
11:32 we can change the page tables in such a way that every register actually results in a trap.
11:38 So when XNU tries to access the hardware, it goes to m1n1.
11:42 m1n1 then logs that register access, and sends it over USB to another computer.
11:47 And that's the setup we then have: again a device under test where we have all this running,
11:53 then a host machine where we get essentially a trace of what the hardware is doing,
11:56 and this can also all be configured, because a lot of tooling was built around this.
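In very rough pseudocode (Python, for consistency with the rest of the tooling; none of these names are m1n1's real interfaces), the trap-and-forward path described above amounts to something like this:

```python
import struct

# Sketch of the hypervisor-side flow: a guest (XNU) MMIO access traps, the
# access is performed against the real hardware on the guest's behalf, and a
# trace record is streamed to the host machine. All names are illustrative
# assumptions, not the actual m1n1 implementation.

def handle_mmio_trap(hw, host_link, pc, addr, value, width, is_write):
    if is_write:
        hw.write(addr, value, width)       # let the real register see the write
        result = value
    else:
        result = hw.read(addr, width)      # perform the real read for the guest

    # Ship a fixed-size trace record (pc, address, value, width, direction)
    # over the USB link to the host that is logging everything.
    host_link.send(struct.pack("<QQQBB", pc, addr, result, width, int(is_write)))

    return result                          # handed back to the guest as if untrapped
```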
12:00 And one example you can do, if you have figured out some registers, you just define these in Python.
12:06 So this is for the USB 2.5 -- because XNU is very, very verbose in this debug output,
12:11 and if you correlate the debug output with the register writes,
12:14 you can then just write down, yeah, this is how the register looks like.
12:16 You can build a tracer and just start this tracer on the host machine,
12:19 and then it's going to pretty-print all the accesses that XNU does for you.
12:25 And this way you can iteratively construct your understanding of how this hardware works.
12:29 So at the beginning it's just raw MMIO writes,
12:31 but then you get an understanding of what some of these registers do, you give them names,
12:34 and then you see in your log, you see how these registers slowly take shape,
12:39 and you slowly build an understanding of how all this hardware works.
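A hedged sketch of what that host-side naming step can look like: a small table mapping offsets to guessed register names, plus a hook that pretty-prints each trapped access symbolically. The offsets and names here are invented examples, not real Apple registers:

```python
# Invented register map for illustration; names are filled in gradually as
# their behavior becomes clear from the traces.
REGS = {
    0x0000: "TX_CTRL",
    0x0004: "TX_STATUS",
    0x0010: "PHY_POWER",
}

def describe(base, addr):
    offset = addr - base
    return REGS.get(offset, f"UNKNOWN+{offset:#x}")

def on_access(base, addr, value, is_write):
    op = "WR" if is_write else "RD"
    print(f"{op} {describe(base, addr):<14} = {value:#010x}")

# Example of how a trace line changes once a register has been named:
on_access(0x380000000, 0x380000004, 0x1, False)   # RD TX_STATUS      = 0x00000001
on_access(0x380000000, 0x380000020, 0x3, True)    # WR UNKNOWN+0x20   = 0x00000003
```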
12:41 And this works really nice for very complex hardware.
12:43 And if you have put your effort into building all this tooling,
12:48 this is what you then see in your host machine.
12:50 For example, in this case, you can see some system register access from XNU.
12:54 You can see some MMIO access in the power manager region, how it changes some registers.
12:58 We can also emulate hardware, of course.
13:02 For example, if there's some hardware we don't want XNU to touch in certain ways,
13:06 because we maybe have messed with it a little bit.
13:08 We can emulate this.
13:08 For example, the starting CPUs, we just emulate inside the VM.
13:13 Or we can just print all the accesses we see on that other machine.
13:18 And this usually allows us to reverse engineer much of how this works without even opening the disassembler.
13:22 Now, sometimes you need to, because sometimes you can't tell where something is coming from.
13:26 But I'd say that for most of the hardware, this is much, much quicker,
13:28 because it kind of feels like if you're a software guy, there's stuff like strace,
13:33 and this kind of feels like you're building strace for hardware.
13:36 And this makes for a really nice developer and reverse engineering experience.
13:40 And now, with all this, so you reverse engineer all the hardware,
13:44 and the result of all this is essentially you pipe curl to bash,
13:48 wait 10 minutes, follow the instructions,
13:50 and then you get a fully working Linux distribution with graphic acceleration
13:54 and lots of hardware support in your machine.
14:07 Now, like I mentioned earlier, in the beginning there was a lot of feature work.
14:11 So we built a lot of features, but we didn't upstream these originally.
14:14 So we had like over a thousand downstream patches in our fork of Linux.
14:17 And that's just not a great situation to be in.
14:20 It's great if you want to build features quickly,
14:23 but it's not so great if you actually want this to be sustainable.
14:26 Because in the end, we don't want to build the distributions for Apple Silicon.
14:30 We just want all distributions to just work.
14:32 And the only way to make that work is to upstream these patches into the Linux kernel.
14:36 And that's when earlier this year, some of you familiar with the project might have known that
14:40 a bunch of people stepped down because either they didn't want to work on this anymore,
14:44 or you can read all about it online, I'm not going to get into that.
14:46 But we also used that to change the approach and say,
14:49 okay, now we're going to fix our technical debt, essentially.
14:52 We're going to clean up all these drivers and get them upstream before we focus on new features.
14:56 And so this year was a lot of upstreaming about the system controller, USB 3,
15:00 some audio stuff, not all of it, parts of the GPU, especially the user space.
15:05 And we're in much better shape now.
15:07 We still have two fairly big kernel drivers, the display controller and the GPU,
15:10 that are only downstream, but we will continue to work on pushing these upstream.
15:15 And, yeah, that's our current focus, just upstreaming all these things before we build new features.
15:21 And now that I've talked about how the reverse engineering tooling and all that works,
15:29 I'm going to walk through how I used all of this to reverse engineer the USB ports and what makes
15:34 them so special and why it's so complicated to support them on these machines.
15:40 Now, USB is fairly old. In 1996, the first version dropped: 1.5 Mbit/s, they called it low speed.
15:48 Two years later, they realized this is actually quite a nice bus, so we could use more bandwidth.
15:54 Let's do full speed. Well, eventually this was also too slow and after full obviously comes high
16:01 speed with USB 2, so this is 480 Mbit per second. Again, today this is still slow,
16:06 so we're going to do super speed with 5 gigabit per second. Because this naming was not confusing
16:13 enough, we're going to do super speed plus with 10 gigabits per second. Now, at this point,
16:20 even the USB Implementers Forum realized that maybe the naming gets a little bit confusing, because
16:24 if you tell a random person, oh yeah, I have a USB full speed device, you would expect it's very
16:29 fast. Well, it's actually very, very slow. And so what they finally did was,
16:34 when they introduced the 20 gigabit version, they just called it USB 20 gigabit per second,
16:38 and they changed all the names. I mean, not all of them, they kept low, full and high,
16:42 but for all the others, they just changed the marketing names to essentially just how fast
16:47 the USB link actually is, to make this a bit easier. And then in 2019, we got USB 4, which does up to 40
16:54 gigabit per second. And USB 4 is a little bit special because it's not just another iteration of
16:59 USB with more speed. This is essentially an open version of Thunderbolt, because the specification is
17:06 fully available. It's very, very long. And USB 4 then also supports stuff like PCIe tunneling and
17:13 DisplayPort tunneling and so on. But the point I'm trying to make here is that this is a very complex
17:19 piece of hardware with lots of moving parts that all have to be supported. And while improving
17:24 the bandwidth, they also needed new connectors at some point. So you're probably all familiar
17:29 with USB A and B, and there's mini and micro, and then the USB 3 variants and so on. So lots of these
17:35 connectors, you probably know how it goes. You always need three tries to get these things working.
17:42 And at least in theory, with USB-C, this is no longer the case. So this is the new connector
17:49 which has a lot of pins. It's symmetric, so at least in theory, you can just rotate it and everything
17:54 will work fine. And this can also carry a lot of different signals. And all the modern Macs just have
18:00 many of these ports, essentially. And finally -- at least, that's the goal -- everything just works.
18:07 One caveat: because people implement this wrong, even with USB-C you can make it such
18:18 that it only works in one orientation. There are a bunch of cursed devices, or cables, that only work in a single
18:21 orientation. So unfortunately, this does not always work out. But otherwise, this is a fairly amazing
18:26 connector. And like I mentioned, it supports, on these machines, four different protocols. There's
18:33 the very low speed USB 1 and 2. There's USB 3. Then you have Thunderbolt or USB 4. And finally,
18:39 DisplayPort. And we have to get all of these things supported. And the way this is usually
18:46 implemented in hardware -- this is the very rough overview diagram -- you have USB 2, which still to this
18:51 day uses separate lanes. So you still have the D plus and D minus. It's almost differential signaling.
18:55 And even if you buy a modern USB 4 or Thunderbolt 4 dock, you will still have these D plus and D minus
19:01 lanes. And that dock will contain a USB 2 hub, because this protocol cannot be tunneled, unlike all of the rest.
19:06 The rest all happens over these high speed lanes. In USB, it's SSTX and SSRX for SuperSpeed transmit and
19:13 receive. And there's a bunch of differential pairs in this connector. But because you have different
19:17 protocols, you need to negotiate, because both ends need to speak the same language. And so you need
19:22 some way to figure out which protocol do we want to speak. And that happens over the line that's called
19:26 CC and the USB power delivery controller. And this controller, when you plug it in, it's going to
19:31 negotiate and figure out which mode both ends support. And then decide what the hardware is going to do.
19:38 And then there's a switch that switches these high speed lanes to one of the three different hardware
19:43 components. But this also means that to get this working, there are at least five or six moving parts
19:48 in here that all have to work together.
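As a rough illustration of those moving parts, here is a small, hypothetical Python sketch of the decision that has to happen once the PD controller finishes negotiating over CC. The mode names, the LaneMux class, and the PHY labels are invented for this example; they are not Apple's hardware interface or any real Linux API.

```python
from enum import Enum, auto

# Hypothetical sketch: after CC negotiation, the port logic has to pick a
# protocol and route the high-speed lanes accordingly. Everything here is
# illustrative only.

class PortMode(Enum):
    USB2_ONLY = auto()        # only D+/D- in use, high-speed lanes idle
    USB3 = auto()             # SSTX/SSRX pairs carry SuperSpeed
    DP_ALTMODE = auto()       # high-speed lanes carry DisplayPort
    USB4_THUNDERBOLT = auto() # lanes go to the USB4/Thunderbolt router

class LaneMux:
    """Stand-in for the crossbar that routes the high-speed lane pairs."""
    def set_orientation(self, flipped: bool):
        print(f"orientation: {'flipped' if flipped else 'normal'}")
    def route(self, target):
        print(f"high-speed lanes -> {target}")

def apply_negotiated_mode(mux: LaneMux, mode: PortMode, flipped: bool):
    # The CC lines also reveal the plug orientation, which the mux/PHY
    # needs in order to swap the lane pairs.
    mux.set_orientation(flipped)
    target = {
        PortMode.USB2_ONLY: None,          # D+/D- always stay wired up
        PortMode.USB3: "usb3-phy",
        PortMode.DP_ALTMODE: "dp-phy",
        PortMode.USB4_THUNDERBOLT: "usb4-router",
    }[mode]
    mux.route(target)

apply_negotiated_mode(LaneMux(), PortMode.USB3, flipped=True)
```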
19:52 And on Apple hardware, none of these are documented. Obviously. And this was the situation when I started.
20:03 But with the reverse engineering tools we have built, I'm going to walk you through how you can approach
20:07 this really big problem, tackle it step by step, and make these things work. Now, these
20:13 machines are essentially, at the beginning, very big iPhones, because Apple has been building these chips,
20:19 Apple Silicon, over the past 15 years or so, and at some point realized, oh wow, these chips are so
20:24 fast, we can actually build desktop machines with that now. And we know from the iPhone hackers that
20:28 Apple used to use this Synopsys DesignWare controller, version 2. So maybe they're still using it on these
20:34 machines. And what I figured out pretty quickly, by just dumping the MMIO from my computer and comparing
20:39 it with known USB controllers, is that it's almost the same as version 3 of this controller. And
20:45 as it goes, you can't find an official data sheet for this unless you're a big company and willing to
20:52 sign NDAs and so on. But all the registers, if you just Google for one of these register names,
20:57 you find data sheets by Intel or Rockchip, who have built SoCs and just include all the documentation in
21:01 there. So I'm not sure why the official one is under NDA, because you can find anything in the open anyway,
21:06 but usually that's how it goes. So the first step is always: use Google and try to find it. Even if the
21:11 official data sheet is under NDA and you can't find it anywhere, maybe it's just in some other SoC's
21:16 documentation, where you get enough information on how to implement these things. There's also a Linux
21:22 driver for this, which is very nice. And it even used to be dual-licensed BSD and GPL many years ago,
21:27 so even if you're building more permissively licensed software, you can still use it as a
21:32 reference. They switched the license, I think it was in 2016, so you just have to go back to the old commits
21:36 and make sure you don't use any of the newer code. So this is awesome. This is essentially documented;
21:41 we just have to implement it.
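For a flavor of that workflow, here is a hedged sketch of the kind of quick check one could run from a Python hardware shell: read the controller's ID register and compare it with what the publicly available dwc3 material describes. The base address and the read32 helper are placeholders, and the 0xC120 offset and 'U3' signature should be verified against your own sources rather than taken from this sketch.

```python
# Sketch of a sanity check for a suspected DesignWare USB3 core: read the
# ID register and compare it with what the public dwc3 documentation and
# driver describe. Base address and read32 helper are placeholders.

DWC3_GSNPSID_OFFSET = 0xC120

def identify_controller(read32, base):
    reg = read32(base + DWC3_GSNPSID_OFFSET)
    core = (reg >> 16) & 0xFFFF
    version = reg & 0xFFFF
    if core == 0x5533:  # ASCII 'U3': looks like a DesignWare USB3 core
        print(f"DWC3-style core detected, version {version:#06x}")
    else:
        print(f"Unrecognized ID register value: {reg:#010x}")

# Offline demo with a fake register read:
identify_controller(lambda addr: 0x5533330A, base=0x3_8228_0000)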
21:47 And back then, to use m1n1 and do all the reverse engineering, you had to build a hardware serial adapter,
21:51 and that was fairly annoying because you have to speak USB PD, you have to build a 1.2 volt level shifter
21:56 and all that. And so the first thing I did was just implement a USB gadget mode. So CDC-ACM is just the USB class for serial devices, and I implemented that in m1n1.
22:03 It seemed to only work once. So when I disconnected the plug and plugged it in again,
22:09 it just never showed a new connection event. I just thought, okay, I just messed up. It's probably
22:13 a dumb bug somewhere. This works fine for now. Let's instantiate this in Linux and see what happens.
22:20 It also only works once. So if you connect it, the first time you see the connect event and
22:25 everything goes fine. But the second time it just doesn't show up and you don't see this. So this
22:30 seems weird. And the other part is that this only works in device mode. So I could emulate a serial
22:37 device or a USB storage on the Apple Silicon machine, but I could not plug in a USB stick or any device
22:44 that you would actually want to use. And my suspicion, which I then checked, was that those devices just
22:50 don't get power, because we never touch the power controller at this point. And
22:54 Type-C can carry a lot of power these days, up to 240 watts. So I was told a few days ago that there's
23:01 only one device that actually uses this full power. But apparently you can carry quite a bit of energy
23:06 over these lanes. And so you need negotiation because otherwise you're just going to burn down all your
23:10 hardware. If an unsuspecting small phone suddenly gets 240 watts pumped into it, that's not going to go well.
23:16 So this is why there's always power negotiation going on. And why the default mode is don't apply
23:20 any power at all. That's this protocol over the CC lines that I mentioned a little bit earlier.
23:25 And like I also mentioned, this also does alternate mode negotiation. And on Apple Silicon machines,
23:33 this is handled by a TI-based USB-PD controller. It's not branded as TI anymore, but there were some
23:41 references in the code and in the strings where you could see, okay, this is probably an old TI chip.
23:46 On the left side, you can see what I did there, because there were some details to figure out
23:49 from the firmware. So I got one of these from, I think it was AliExpress, and dead bugged it,
23:55 connected JTAG, and dumped the whole firmware of it to figure out some details. But this was just at the
23:59 very end. The more important part is that there was a Linux driver. And by tracing it all
24:05 again, we just had to change a few things to actually make this work. And then, yay, USB host mode works.
24:10 But it still only works once. So you get to use your Linux device or your USB device exactly once,
24:17 which is not really what you want to do because these days you plug in USB devices a lot, so it's
24:21 time to figure out why this is going wrong. And this is again a situation where this reverse engineering
24:26 tooling that was built is so amazing. Because if you weren't able to do dynamic tracing,
24:32 you would now have to go and figure out, okay, why does this only show up once? And you could go down
24:35 into so many rabbit holes because USB 3 or even USB 2 is such a complicated thing to drive. And you
24:42 could go into -- there could be so many things going wrong, and it would probably take a long time to
24:46 figure out why you never see a second plug event. But if you have built all this tracing and identified
24:54 many of these registers, you just boot into XNU, plug a USB device, unplug it again, plug it again,
24:59 and then take a look at the log you got over USB to figure out what's going on.
25:02 And in this case, what I found very suspicious was that suddenly, after I unplugged it,
25:08 like a few milliseconds later, it asserted a full reset of the USB controller.
25:13 Then it asserted a reset of the port and the bus, and then it gated clocks and everything. So it turned
25:20 the whole system completely off. And then when I plugged it in again, it did the whole thing in reverse.
25:26 So it turns out that their controller is so messed up that if it's been initialized once,
25:30 it can never be initialized again. So you have to -- every time you disconnect the USB device and
25:36 connect it again, you really have to tear down this entire thing, assert the hard reset line,
25:41 and then bring it back up again. So like I said, you get the interrupt from the PD chip with a hot plug,
25:48 and you only then power it up and bring everything up. Hopefully the USB device works; it usually does.
25:53 Then on unplug you get the interrupt again, and then you have to tear everything down and do a hard reset again.
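Purely as an illustration of that sequence, and not the actual Linux implementation, a sketch of the hotplug handling might look like the following; the Port class and its helpers are stand-ins for whatever the real platform code does.

```python
# Illustrative sketch of the traced behavior: on every unplug, the whole
# controller has to be torn down and hard reset before a later plug event
# can ever be seen again. All helpers here just log.

class Port:
    def __init__(self, name):
        self.name = name
    def _log(self, what):
        print(f"{self.name}: {what}")
    # On real hardware these would poke the PD controller, the clock/reset
    # registers and the dwc3 core; here they only print.
    def vbus(self, on): self._log(f"VBUS {'on' if on else 'off'}")
    def clocks(self, on): self._log(f"clocks {'ungated' if on else 'gated'}")
    def reset(self, asserted): self._log(f"reset {'asserted' if asserted else 'deasserted'}")
    def core_init(self): self._log("dwc3 core init")
    def core_teardown(self): self._log("dwc3 core teardown")

def handle_hotplug(port, plugged):
    if plugged:
        # Plug event from the PD controller: power, clocks, reset release,
        # then (re)initialize the controller from scratch.
        port.vbus(True); port.clocks(True); port.reset(False); port.core_init()
    else:
        # Unplug: waiting for the next plug is not enough; the block has to
        # go through a full hard reset or the next connection never shows up.
        port.core_teardown(); port.reset(True); port.clocks(False); port.vbus(False)

p = Port("port0")
handle_hotplug(p, plugged=True)
handle_hotplug(p, plugged=False)
```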
25:58 Now -- and this has to be implemented inside Linux, which I tried, and my original implementation was
26:08 pretty hacky because I used the USB role switch code to hack this in there. And we kept that for over a
26:13 year because I thought, oh yeah, this is quite ugly, I'm not sure how to do this correctly, and so on.
26:17 And this is one of the things where I can only encourage you, if you're working on Linux,
26:19 try to upstream as early as possible. Because when I sent out this series, like two days later,
26:26 the maintainer of the driver mentioned, oh yeah, this is probably not a nice way to do it,
26:30 but I have a much better way. How about you do it like this? I was like, oh wow,
26:33 I should have done this from the beginning. So upstreaming not only helps you maintain your code,
26:38 it also makes your code better, because there are people out there who really understand the driver,
26:42 and if you tell them precisely what you need to change for your hardware, they can essentially
26:45 tell you, oh yeah, this is the best way to do it. And so again, upstreaming: good idea,
26:50 do this as often as possible. And with that, we have USB 1 and 2 fully working. And now I could show
26:58 the same slide for USB 3 and so on, but it's really the same process. So USB 3 is a bit
27:03 more complicated because it's much, much higher speed and USB 2 is fairly tolerant of the hardware
27:10 you put in there. With USB 3, you have to do link training and make sure that the bits you send
27:14 on the one end arrive the same on the other, and it all hinges on clock recovery and all this kind
27:18 of weird hardware magic. And so what you do is just the same thing. You just observe the MMIO again,
27:25 see what it's doing, try this with a few different devices, and then just write a PHY driver. And this
27:30 was finally upstreamed a few days ago. So now we also have USB 3 support in the upstream kernel.
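As a hedged sketch of what a first-pass PHY bring-up often looks like after this kind of tracing, one might replay the captured write sequence and then gradually replace the magic values with understood fields; every offset and value below is made up purely for illustration.

```python
# Hypothetical first-pass PHY init: replay a traced register write sequence.
# All offsets and values are invented for this example.

TRACED_INIT_SEQUENCE = [
    # (offset, value) pairs taken from the hypervisor MMIO log
    (0x0000, 0x0000_0001),  # e.g. enable the block
    (0x0010, 0x03A0_0000),  # e.g. some analog tuning value
    (0x0024, 0x0000_0F0F),  # e.g. lane/termination configuration
]

def phy_init(write32, base):
    for offset, value in TRACED_INIT_SEQUENCE:
        write32(base + offset, value)

# Offline demo with a logging write helper:
phy_init(lambda addr, val: print(f"W32 {addr:#x} = {val:#010x}"),
         base=0x3_8200_0000)
```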
27:37 Next one is the DisplayPort. And DisplayPort, people really want this. So just this morning,
27:51 when I was finishing the slides, I randomly looked for people asking for
27:54 DisplayPort. Of course there's someone: you know, it works like a charm, everything's great, but my
27:58 external screen doesn't work. Then you look into our subreddit, you always find on the first page
28:05 someone wondering, you know, why does DisplayPort not work, and so on. And essentially, it's because it's
28:10 complicated: there's a separate display controller that you have to set up, and you have to speak a very weird
28:18 serialization protocol, actually two or three of these, because one is not enough. And you have to put
28:24 this all together with the USB 3 PHY that was upstreamed. And then you can maybe get DisplayPort
28:29 working. But those of you in the audience here can probably see that I'm giving this talk from an
28:35 Apple Silicon machine. And this is running Linux, just as a proof. So we finally have this working now,
28:42 and I was very nervous about whether this would actually work out today. And this was really mostly Janne spending
28:56 the last few days on it; he gave me kernels, we tested them, and so on. And now it seems to be in a state
29:01 where it's mostly working. Let me see if I can full screen again. And we pushed the code that enables
29:07 this probably a few minutes ago. Right now it's meant for developers to help us iron out the last
29:12 bugs. So if you want to contribute, this is one example. You could take a look at this tree, compile your
29:16 kernel, see if it works with your screens, and if not, see if you can debug this and figure out why it
29:21 isn't working. And our goal is to make this generally available for all people sometime early in the
29:26 next year once we've done a bit more developer testing and so on. But if you're a developer,
29:30 feel free to check this out and feel free to help us bring Linux to these awesome machines.
29:35 And yeah, with that we have DisplayPort. And speaking of contributing,
29:42 I promised that in the last part of the presentation I was going to talk about the future of M3, M4,
29:48 and M5. And like I mentioned earlier, we are all pretty much focused on upstreaming and not on new
29:54 features, which is the right move in our opinion, but we were hoping for new people to help us,
30:01 especially with M3, M4, M5. We'll get to them eventually, but new contributors are always nice.
30:07 And one of the things is that so many drivers are already built that the differences between
30:13 something like M1 and M2 and M3 are generally minor. Because Apple also has no incentive to
30:18 change their hardware much. Because it costs them money: they have to rewrite all their drivers again,
30:23 then they have new bugs again, they have to fix them again. So they really have no incentive to change
30:27 a lot on most of these machines. And I mean, they might do minor changes, like moving one chip from
30:32 one bus to another, because they found a new source for the chip, or because they want to reduce BOM
30:36 costs, or because maybe just the routing of the PCB is better. But these are very, very minor changes.
30:41 Or maybe in something like NVMe, they change how some register works, or how the interrupt controller
30:47 works, or how the coprocessor communication works. But these are all things that, if you're an
30:52 experienced systems engineer, you can probably figure out. So
30:56 it still takes some time and some experience, but this is not really rocket science, because we have
31:02 all these amazing reverse engineering tools. The one exception is the GPU, because there innovation
31:06 still happens (you know, AI and all that, and whatever they need), so they're still
31:10 building new things there. And especially on M3, this is going to need significant work to get working.
31:14 Now, these were just our assumptions, and we were hoping someone would
31:21 step up, and the good news is that just happened. So we had a new contributor, Integral Pilot,
31:25 who has been working on M3 bring-up over the past weeks. And yesterday he shared some screenshots
31:30 and some progress: it boots on his machine, I think it was an M3 Max, with all the cores,
31:35 the storage works, and he fixed all the minor issues that were around. There's no graphics acceleration yet,
31:40 which, as I said, will take a longer time. But you can, of course, already play Doom.
31:54 And again, we're going to continue working on this. This is still in the very, very early stages,
31:59 but hopefully next year we will continue to make progress there.
32:02 Then, I have a few more minutes left, not a lot, but I will very briefly talk about M4 and M5.
32:08 In general, it's the same story, so probably minor changes again here and there.
32:12 Not a big deal, but there is one unfortunate change that happened, which makes this a little bit more annoying.
32:19 In the beginning, I told you that we get EL2, the highest level, and use it to run XNU and trace the MMIO.
32:25 But this was kind of a lie. It's a bit more complicated, because Apple likes to extend the ARM
32:32 architecture to their liking. And what they did in this case is introduce
32:38 something they call guarded levels, which use the same page tables as the normal level, but they modify the
32:45 permission bits in the page table. And they do this because XNU is a fairly large attack surface,
32:51 and they don't want you to be able to own their whole system if you just find any bug in some
32:55 random kernel driver. So one of the things they did, introduced originally on the iPhone, is this page
33:00 protection layer, which runs in a higher privilege level, and only that one is allowed to modify page
33:04 tables. And this is all fine. We reverse-engineered all of this, and we're virtualizing all of this on M1 and M2,
33:10 and M3 as well. But with M4 and M5, they disable these instructions for us. We don't know why. Maybe
33:17 it is because ARM doesn't like them doing this, or doesn't like them allowing everyone to play with
33:22 these instructions, because they like to keep the instruction set very, very strict and contained.
33:26 But that's just our best guess. Still, on M4 and M5 we now get two boot modes. One of them is: if you
33:33 want to boot XNU, you get dropped into EL2, but Apple code is already running, and it's running
33:39 in GL2, and you can't port Linux to this mode. But Apple still added another mode, because like I said,
33:44 this has all been intentional. They want people to be able to run their own code with what they call
33:49 raw boot objects, and there you get dropped into the same mode you always have been. So you can
33:52 port Linux to this just fine. There is no roadblock. It's just that someone needs to do the work.
33:59 The only problem is that this breaks our reverse engineering tools, because we now can't virtualize
34:04 XNU anymore. We can't do the hardware tracing anymore. So this makes reverse engineering much,
34:09 much, much more annoying. But if anyone wants to step up and help us with these problems,
34:13 I've already talked to a few people in the past few days. I'm going to skip over the solution,
34:17 because I don't have that much time anymore. But feel free to reach out to me. I'll be at the
34:22 fail0verflow assembly for another hour or two or so before I leave. You can also send me a mail,
34:26 follow me on Fedi wherever, go to our website, see what's installed, or just come to IRC if you want
34:30 to contribute in any way. And other than that, thank you very much for your attention. We hopefully
34:35 have time. Yeah, we still have a bit of time for a Q&A now. Thank you.
34:38 Yes, there's only a little time. So if you have a question, stand up, go to the mic,
34:52 and we'll try to do this really quickly. And we start immediately with microphone number one.
34:56 Hi, thanks a lot. I love my SI, and I will love it even more after this part is working. Thanks for
35:02 the great work. A quick question: what are your thoughts on why Apple actually allows running
35:08 other code on their hardware? It doesn't seem like there's any incentive for them.
35:12 I mean, I can just guess. My best guess is that they're selling a computer,
35:18 and with a computer people expect to be able to run their own operating systems. With an iPad or an
35:24 iPhone, people just don't expect that anymore. So that's my best guess, but I have absolutely no idea.
35:28 Microphone number two. Thank you for the talk. I have a question on your slide about the USB and
35:37 DisplayPort support. You conveniently skipped over USB 4 or Thunderbolt support. What's the state of that?
35:44 The state of that is: that's what I'm going to work on next. So the DisplayPort is mostly Janne at this
35:49 point, and I'm going to start working on USB 4 and Thunderbolt now. We roughly know how it works,
35:54 and there's also a nice driver in the kernel from Intel that does most of the work we need. We need to
35:58 adjust that a little bit. The thing that's going to be tricky is PCI hotplug, because at least when I
36:04 was trying, especially if you plug very complicated Thunderbolt chains like a dock after a screen after
36:08 a dock or whatever, the hotplug code is not very happy with that. So we'll have to do some work there
36:12 probably, but we have a rough idea how it's going to work, and hopefully sometime in the next year,
36:18 but I'm not going to promise any timelines. Thank you. Last question from microphone number four.
36:24 Hi. Thanks very much for the work and the talk. I've got a question about bus topology with USB,
36:33 because you mentioned that if you essentially plug and unplug devices, you need to restart the
36:40 controllers. How does this affect all other USB devices on that bus?
36:46 So you only have to tear down the entire controller if you unplug something at the root ports of the
36:54 MacBook, and each of the ports has its own USB controller. So you only have to reset that one
36:59 port, and all the rest is going to be totally fine and not break, and so this kind of just works out.
37:04 Thanks. So, really the last question, because there are still people lining up. Sven, this one is from me:
37:11 when people still have questions, where can they find you during the last day of Congress?
37:16 Yes. So I'll be hanging out at the fail0verflow assembly in, I think it's hall three or four,
37:20 so you can find it on the navigation, and I'll be there until probably around two or so today.
37:24 So just come by, stop by, say hi. I'm happy to answer all your questions there. Otherwise,
37:27 just reach out to me on, yeah, some other way. Thank you very much, and this is your applause.
37:32 Thank you very much. Thank you very much.
37:41 Thank you.