Since I bought a second-hand Leica X2, a twelve-year-old camera, I have started playing with photography again after a long break. In the past, I used to develop my photographs in a room of mine that I had turned into a darkroom. Those were the days!
Digital photography is the way to go for me right now. I have already written about why I bought a second-hand camera; look back through the archive if you are interested.
Yesterday I was working on my computer with some photos I had taken earlier.
Suddenly, I stopped and started wondering whether what I was doing was right, or at least fair.
Should photos remain the same as you took them when you pressed the shutter?
Are you altering reality when you retouch a photo?
Making these adjustments changes, often dramatically, what the camera sensor has captured. The result is a representation different from what the sensor saw initially. As a result, you are altering the “reality” of the photo. The fundamental question remains: did the camera sensor capture a snapshot of reality?
This question is challenging to answer.
In the first place, a photo is not what our eyes see. Our eyes capture reality in three dimensions, and our brain often corrects what we see to make sense of it. A digital camera makes a two-dimensional representation of what it sees through the lens.
Then comes the camera lens itself. The focal length, the aperture, the shutter time, and the lens’s optical construction make a vast difference between what our eyes see and what is recorded on the camera.
If you read carefully what I have written so far, you may notice that I am almost writing about the path of light from the subject to your computer.
After the light has traveled through your camera lens, it will hit the camera sensor. As you may know, a camera sensor converts light into digital data. Every sensor is different, giving its own “interpretation” of what it sees from the lens. Even if you take two identical cameras, the two camera sensors will have slight, unnoticeable differences because of the manufacturing process.
Then comes the camera firmware that will interpret the digital data coming from the camera sensor and convert it to data, typically a file format, that a computer can deal with. This firmware significantly impacts the final result of the photo you will see on your computer. If you think about analog cameras, it is about the same because the light will impact a physical film, and each film has different characteristics.
Finally, the photo lands on our computer, and we retouch it. Contrast, white balance, exposure, color, and so on.
You can see that what you are looking at on your computer has already been heavily manipulated on its journey from the original subject. What has not changed is the subject, or the “moment.”
When you retouch the photo, you add your interpretation of the moment.
Marc Bloch wrote that history does not exist. Interpretation of history exists.
We can apply the same concept to photography.
When we shoot and retouch a photo, we offer viewers our interpretation of a moment.
I understand that some people suggest not retouching photos at all. I don’t want to argue with them; each approach is valid. I simply note that even if you do not retouch your photos, they have already been heavily altered from what our eyes saw.
I am an avid reader. The Kindle app on my iPhone tells me that in the first three months of the year, I have already read sixty volumes, to which must be added the ones in paper format that have never found a place in the Kindle. When I read a book, I cannot help but immerse myself in it. I take notes, underline, dog-ear the pages of paper books, and slip anything I can use as a bookmark between the pages.

Once upon a time, I used to transcribe my underlined passages into paper notebooks, for future reference, so to speak. With the advent of the Kindle, I stopped doing that, perhaps mistakenly, but continued to underline digitally. I regularly synchronize these passages to my computer using Obsidian and its dedicated plugin. From time to time, I find myself going back to these notebooks and their equivalent in Obsidian, and I ask myself: what did I underline back then?

Over time, I have become convinced, as Salvo Montalbano would say, that some books come into your life just when you need them. They are perfect for that specific moment in your life, not a week before or a week later. They are “right” only at that moment. The same is true for the passages I underlined: they were important to me only at that moment. Exceptions are rare. For this reason, when I browse through my old notebooks, I often wonder what the heck I was thinking and why I felt it necessary to highlight that particular passage. I (almost) never manage to give myself an answer, and that’s okay.
I was not surprised to read that Apple killed the Epic developer account because it had proven to be “verifiably untrustworthy.”
Nevertheless, I have heard of other developer accounts being killed for no reason. Epic has the legal power to confront Apple in that regard; most developers don’t.
At the same time, we, as users, do not have the power to confront these kinds of decisions. Apple, Facebook, Google, Amazon, eBay, PayPal, you name it: every single one of these high-tech moguls has the power to terminate a user account. As a user, you can appeal such a decision, but best of luck if you try. Most of the time, you cannot even talk to a human.
We depend enormously on online services that offer no guarantee they will still be there for us over time.
Just think about email. I have a free Gmail account for most of my private stuff. I use it to register for services that I need daily, such as home banking, health insurance, utilities, and so on.
Well, Google may decide to kill my email address, and there would be little more than nothing I could do about it.
Frankly, this is a complete mess. We are not free online anymore.
The relationships between different online services have become so tight that we are confined in a cage without even knowing it.
There’s more.
As I have written, my house is completely automated with Home Assistant. For Home Assistant to work, I need integrations with other services. Philips HUE for lighting, Netatmo for climate control, Blink for my surveillance system, and Garmin Connect for my health data, to name a few.
There are implications with this approach.
Any of those providers may decide to go out of business, or they may decide to kill their APIs. If Philips does this, I will no longer be able to turn my home lights on and off. I do not have any control over these services.
Then comes privacy. When I switch a light on, Philips knows I have done it. When I switch off my bedroom lights, Philips knows I am going to sleep. My behavior in my house is completely monitored by someone out there. The rationale that keeps me going is that the benefit I receive is more valuable than the personal data I am giving away.
This does not change the fact that this is wrong.
I am exploring options to disconnect everything from third-party services and keep my personal data private. I am a tinkerer, and I can pull it off. The vast majority of people out there can’t.
I should do the same for e-mail and host it on my home server. This sounds easy, but it is not at all. Just do a quick search on personal e-mail servers, and you will find that while it is easy from a technical standpoint, all the anti-spam infrastructure out there will make your efforts vanish in a breeze.
Shakespeare wrote, “Something is rotten in the state of Denmark.” We can say that something is rotten in the state of the internet.
I must admit, I have a deep-seated affection for vi. It harks back to the eighties, a time when I first ventured into the world of computing. Yes, you might say I’m a bit of an old-timer when it comes to these things. Back then, the journey into vi’s realm was not for the faint of heart. The learning curve resembled a steep mountain, and resources were as scarce as water in a desert. No YouTube tutorials to guide us, no online courses, just the enigmatic man page and a few knowledgeable colleagues sitting beside you, ready to share their wisdom.
Even in the present day, I find myself returning to vi, especially on my trusty Mac, whenever I need to accomplish a task quickly and without fuss. It’s a testament to vi’s timeless utility. These days, we’re spoiled for choice with an array of plugins available for both vi and its modern incarnation, vim. Such luxuries were unimaginable during my early days with the editor. With the right plugins, you can effortlessly transform vi into a highly efficient and lightning-fast integrated development environment (IDE).
Of course, I’ve dabbled in more modern IDEs like Visual Studio Code and PyCharm when the need arose. But vi remains my ever-reliable problem-solving companion, a tool that feels like an old friend.
Besides my Mac, I also have an Acer PC running Kali Linux. It’s my go-to for tasks I’d rather not associate with my company-provided personal computer. While setting up vi on this Linux box, I decided to seek out some visually appealing configurations. I found one that piqued my interest, and I was all set to embark on the installation journey.
The installation process appeared straightforward enough: copying files, moving others, and overwriting some. On a Linux system, the common practice often involves using ‘wget’ and piping the output to ‘bash.’ However, I operate my Linux box with a standard user account, avoiding administrator privileges whenever possible—a basic security practice, in my opinion.
Here’s where my caution kicked in. I didn’t personally know the individual behind these configuration files. How could I entrust one of their scripts to run on my machine? To alleviate my concerns, I delved into their GitHub repository and scrutinized the scripts intended for installation. Much to my relief, nothing nefarious jumped out at me, so I proceeded as per the recommended installation steps.
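My routine now looks something like the following sketch: download the script to a file, actually read it, and, when the author publishes one, compare its checksum before executing anything. This is a minimal illustration in Python, not part of any particular project (the function names are mine):

```python
import hashlib


def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def safe_to_run(path, published_checksum):
    """Only agree to run a downloaded script if it matches the checksum
    its author published alongside it."""
    return sha256_of(path) == published_checksum
```

Of course, a matching checksum only proves the file is the one the author published, not that the author is honest; reading the script remains the real defense.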
Reflecting on this experience, I realized how easy it would have been to run an unknown script from an anonymous user without a second thought—a rather reckless notion. There’s a substantial risk involved, with the potential for harm to you or your computer looming large.
Indeed, a malicious actor could easily clone a well-established GitHub repository, surreptitiously insert malicious code, and infect the machines of unsuspecting users. It’s a stark reminder that one should exercise caution and discretion in the digital realm.
In light of this, I’ve resolved to be more discerning in my online interactions and to exercise due diligence before running any scripts from unknown sources. After all, a little extra vigilance goes a long way in safeguarding oneself and one’s digital environment from potential threats.
In the world of electronics and microcontrollers, Abacuc is slowly but surely making its way. With meticulous precision, the task at hand involves the intricate implementation of 6522 logic into its framework. Abacuc has delved deep into the 6522 datasheet, scouring its contents for understanding. However, some sections of the datasheet have proven to be enigmatic, leaving Abacuc with more questions than answers.
In a quest for clarity and practicality, Abacuc has taken a bold step – the acquisition of a genuine 6522 chip. This tangible piece of hardware serves as a tangible bridge between theory and practice, allowing Abacuc to put the 6522 logic to the test in a real-world environment.
The grand vision that Abacuc is pursuing involves the construction of a 6502 single-board computer, a testament to its passion for microprocessors and integrated circuits. At its core, this ambitious project comprises a 6502 microprocessor, a 6522 Versatile Interface Adapter, a ROM chip, a RAM chip, and a plethora of logical components. Together, these elements will orchestrate a symphony of operations, unveiling the precise timing and control register status – two areas where Abacuc seeks enlightenment.
Taking inspiration from the ingenious work of Ben Eater, Abacuc has embarked on a journey to recreate and understand the inner workings of these intricate components. The design contemplates the utilization of a 1 MHz quartz clock, reminiscent of the original Apple 1, or an external clock module driven by trusty 555 chips – ideal for generating single-step clock pulses or ultra-low frequencies.
The heart of the matter lies in the need to comprehend the exact behavior of the 6522 chip and its pin-level statuses. To achieve this, Abacuc requires a clock source that not only keeps time but can also pause it, allowing for meticulous examination and single-stepping. Fortunately, the 6502 and 6522 components chosen are of the full static design, offering the flexibility to freeze the system at will.
In a twist of fate, the absence of spare 555 chips on Abacuc’s workbench led to a creative solution. Enter the Raspberry Pi, a versatile tool with PWM capabilities, albeit a tad overqualified for the task at hand. Within Abacuc’s drawer lay a pair of Raspberry Pi Pico W boards, which had hitherto remained unexplored. It was the perfect opportunity to delve deeper into the capabilities of this microcontroller platform.
The journey commenced with a vision of the clock’s capabilities:
– A display to showcase the current frequency, utilizing a 4-digit, 7-segment display.
– Six physical buttons, each with its purpose:
  – Reset: A means to reset picoclock to its default configuration, with configurations stored in the main.py file.
  – Cycle: The ability to cycle the first digit from 0 to 9.
  – Shift: A mechanism to multiply the current value by ten, complementing the Cycle button to input frequencies from 0 to 9999.
  – Set: A button to configure picoclock with the currently displayed frequency and initiate the clock.
  – Start/Stop: A control to start and halt the clock, allowing users to freeze the 6502 in its current state. Note: this feature requires modern 6502 models with a fully static design.
  – Pulse: A button to trigger a transition from low to high and high to low on the clock output pin, facilitating single-stepping of the 6502 microprocessor.
– The on-board LED serves as a visual indicator, conveying whether the clock is running.
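The Cycle/Shift input scheme is easy to model in plain Python. Here is a host-side sketch of the idea, not the picoclock code itself (the class and method names are mine):

```python
class FrequencyEntry:
    """Model of the two-button frequency input: Cycle bumps the current
    first digit from 0 to 9 (wrapping), Shift multiplies the value by ten.
    Together they can enter any frequency from 0 to 9999."""

    MAX_DIGITS = 4

    def __init__(self):
        self.value = 0

    def cycle(self):
        # Increment the least-significant digit, wrapping 9 back to 0.
        self.value = (self.value // 10) * 10 + (self.value % 10 + 1) % 10

    def shift(self):
        # Multiply by ten, keeping at most four digits on the display.
        self.value = (self.value * 10) % (10 ** self.MAX_DIGITS)
```

Entering 1500 Hz, for example, is one Cycle, one Shift, five Cycles, and two Shifts.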
As the pieces of this intricate puzzle fall into place, a glance at the Pico documentation reveals an intriguing detail – the ability to utilize PWM frequencies up to 127 MHz, a substantial leap beyond the 14 MHz limit imposed by the 6502. From an electronics perspective, this venture is straightforward. The Pico operates at a voltage of 3.3 Volts, well within the acceptable range for the 6502. However, it’s essential to exercise caution, for the 6502’s specifications dictate that it will not run at more than 8 MHz when powered with 3.3 V – a limitation that doesn’t pose a significant obstacle.
For Abacuc, the choice of development environment falls upon MicroPython. The comfort and familiarity of Python provide a solid foundation, even though the realm of engineering beckons with the allure of C. Perhaps another time, another challenge.
As software and hardware converge in elegant simplicity, the project can be up and running in less than an hour. However, a curious discovery awaits in the realm of Pico PWM. The specifications unveil the ability to produce a PWM signal spanning a vast range, from 8 Hz to a staggering 127 MHz. A stark contrast to previous experiences with the Raspberry Pi Model 3B, which called for timers at lower frequencies and PWM for higher or equal frequencies.
In a final note on the choice of Integrated Development Environment (IDE), Abacuc shares a personal preference. While a great admirer of PyCharm, it concedes that Visual Studio Code holds a special place when it comes to working with MicroPython.
And there it is – the “picoclock” – a fusion of hardware and software, where precision and creativity meet. If you’re curious to explore further, the code awaits.
I have entered a rabbit hole with Abacuc. I have spent much of my free time on it in the past few days.
I run the Abacuc 6502 breadboard computer using a Raspberry Pi Model 2B. I also purchased a Raspberry Pi 5 to get some more speed. When the courier delivered the new Pi, I set it up as a perfect copy of the Pi 2.
I use PyCharm as my Python development environment. I can easily set up a remote SSH connection to the Pi, automatically sync my project files, and run the debugger from my Mac if needed. I wish I had these kinds of tools when coding was my job.
The Raspberry Pi 5 does not play very well with RPi.GPIO. You can use it without problems by running your program as a privileged user, but that is not something I like. It’s a security issue I could probably get away with ignoring, but I wanted a neat solution.
This is the reason why I switched from RPi.GPIO to gpiozero. There are no significant differences between them. I should run some speed tests, but as I previously wrote, Abacuc will be a snail, and speed should be fine here.
I ported my code to gpiozero, and everything worked as expected. I had some issues with gpiozero on the RPi 2: the version installed from the repository is 1.6.something, while my code was written for gpiozero 2.0.something. Moving to version 2.0 on the RPi 2 took a little effort, but I made it.
While porting my code to gpiozero, I read the gpiozero documentation and found something exciting.
You can use the library with a Python program on one computer and drive the GPIO pins of a remote computer over TCP/IP thanks to the services of pigpiod on the RPi.
Cool, I can run the code on my Mac without having it run on the RPi itself. There may be some speed issues, but that will not be a problem.
I can run the Abacuc code on my Mac. But how do I drive the three MCP23017 expanders on the RPi’s I2C bus? Well, there are remote I2C libraries as well!
This is even better than the plans I had. My original idea was to connect a terminal to Abacuc for input/output. It would have been a console application on my Mac talking to Abacuc via TCP/IP. I considered using the original VT220 fonts on this terminal in a classic 80×24 screen resolution. This would have made the terminal really rétro.
I must give this a try!
P.S. I don’t write much about the usual things these days. The reality is that I use coding as an escape from the trenches.
As I wrote in my last post, I am currently spending some time putting together a 6502 breadboard computer. I got the system up and running in a very basic configuration.
I have some debugging output from my Python code. Every time there is a transition of the clock from high to low, I output a debug line that contains the following information:
Status of the RESB, SYNC, RWB, BE, SOB, IRQB, RDY, VPB, and NMIB microprocessor pins.
A binary and hex dump of the address bus.
A binary and hex dump of the data bus.
An indication of whether the microprocessor is trying to read or write, based on the status of the RWB pin (redundant with the pin status above, but I wanted a clear visual indication).
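Formatting such a line in Python is straightforward; this is a sketch of the idea rather than my actual code (the pin-status part is omitted, and the function name is mine):

```python
def debug_line(address, data, rwb):
    """One debug line per falling clock edge: the address bus and data bus
    in binary and hex, plus a READ/WRITE flag derived from the RWB pin."""
    direction = "READ" if rwb else "WRITE"
    return (f"ADDR {address:016b} ({address:04x})  "
            f"DATA {data:08b} ({data:02x})  {direction}")
```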
This gives a good idea of what is happening while executing code. Since I can lower the operating frequency to whatever value I want and single-step, I can look closely at what the microprocessor is doing. (Thank you again for the full static design of the W65C02)
If you read the W65C02 datasheet, you will learn that:
(SYNC) The OpCode fetch cycle of the microprocessor instruction is indicated with SYNC high. The SYNC output is provided to identify those cycles during which the microprocessor is fetching an OpCode. The SYNC line goes high during the clock cycle of an OpCode fetch and stays high for the entire cycle. If the RDY line is pulled low during the clock cycle in which SYNC went high, the processor will stop in its current state and will remain in the state until the RDY line goes high. In this manner, the SYNC signal can be used to control RDY to cause single instruction execution.
This is cool. I can check if the microprocessor fetches an OpCode and disassemble it on the fly, writing the disassembler output to my console.
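The core of such a disassembler is just a table lookup: when SYNC goes high, the byte on the data bus is an opcode, so you can map it to a mnemonic and consume its operands. This toy sketch covers only a handful of opcodes (the table layout and names are mine; a real table covers the full W65C02 instruction set):

```python
# Each entry: opcode -> (format string for the mnemonic, instruction length).
# Absolute operands are little-endian, so the format swaps the two bytes.
OPCODES = {
    0xA2: ("LDX #${:02x}", 2),          # LDX immediate
    0x8E: ("STX ${1:02x}{0:02x}", 3),   # STX absolute
    0xE8: ("INX", 1),                   # INX implied
    0xE0: ("CPX #${:02x}", 2),          # CPX immediate
    0x90: ("BCC ${:02x}", 2),           # BCC relative (raw offset shown)
    0x4C: ("JMP ${1:02x}{0:02x}", 3),   # JMP absolute
}


def disassemble(mem, pc):
    """Decode one instruction at pc; return (text, address of next opcode)."""
    fmt, length = OPCODES[mem[pc]]
    operands = mem[pc + 1 : pc + length]
    return fmt.format(*operands), pc + length
```

Fed the bytes of a small counting loop, it emits lines like LDX #$00, STX $d000, INX, and so on.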
A few days ago, I stumbled upon a GitHub repository from Bill Zissimopoulos, basically doing the same thing I was trying to do. The only difference was that he used an Arduino Mega instead of a Raspberry Pi. Looking at the Arduino code he wrote, I noticed he had the same idea about using the SYNC pin. The disassembler code was written in C, and I ported it to Python, spending an hour or so in the process.
It worked like a charm! Thank you, Bill, for your code.
What I wrote is quick and dirty, but it does the job.
After I finished with Geremia, I had to pick another personal project to spend some time soldering and coding. I have found that mixing hardware and software is much more fun for me than just coding.
I am an old guy, and I grew up in the Commodore 64 and ZX Spectrum era of computing. My first computer was a Sinclair ZX80, shining with its Z80 microprocessor at 3.25 MHz, 1 KB of RAM, and a connection to my home TV.
I had a lot of fun and have always been on the Spectrum side of computing.
It was about time to play with the other side of the coin, the 6502 microprocessor.
This is how Abacuc was born—a breadboard computer based on the 65C02 microprocessor. Why Abacuc? It’s a reference to a character in the movie “Brancaleone alle crociate.” Abacuc was a short man who was carried around inside a small chest while the armada traveled to conquer the city of Aurocastro. As I always say, if you want to understand how Italy works, you should look carefully at a few movies. “Brancaleone alle crociate” is one of them. “Un borghese piccolo piccolo”, “Il marchese del Grillo” and “Amici miei” are the others.
First steps – Is it feasible?
I looked around for some documentation and reference designs. The 6502 was designed by MOS Technology and launched in 1975: a sixteen-bit address bus and an eight-bit data bus, with a clock from 1 MHz to 3 MHz.
Building a computer with this microprocessor is challenging but not rocket science. Get the microprocessor, connect it to RAM and ROM, put a 6522 Versatile Interface Adapter in place, connect a keyboard and a display, and you are done. Ultimately, this is what Steve Jobs and Steve Wozniak did with the Apple 1. Nevertheless, this is definitely out of my capabilities in terms of electronics. I had to find a different way.
I researched and found that the 6502 microprocessor is still manufactured and sold. I checked online and found a modern version: the Western Design Center W65C02. It is available in a 40-pin PDIP package that easily fits on a breadboard. The most exciting thing was that this new version has a fully static core, something the original MOS 6502 did not. This is a big deal to me since I can stop the microprocessor clock at any time without losing the internal state. That’s absolutely cool: I can single-step machine code and look around at internal data and status without compromising the operation of the microprocessor. It also supports clock frequencies of up to 14 MHz.
This would be my next pet project.
I ordered the microprocessor for something less than ten dollars, and while waiting for the delivery, I started reading the Western Design Center W65C02 datasheet. It was a well-written document that gave me a clear idea of how to make it work.
The main issue remained: I don’t know enough about electronics to design a working circuit.
I was sitting in my studio, which sometimes turns into a lab, thinking about this, when my eyes spotted an old Raspberry Pi Model 2B that I had just recovered from an aging Home Assistant setup after moving it to an Intel NUC.
Wait! I am good at software, and the RPi has plenty of GPIO ports I can use. It has a decent amount of RAM and runs at 900 MHz.
I could use the RPi to simulate the hardware I could not design. I can read and write the address bus and the data bus, manage interrupts and non-maskable interrupts via software, simulate RAM and ROM, simulate hardware devices, and deal with all the other control signals of the W65C02 via software.
Using a 50-dollar device to run a 10-dollar machine is not the most convenient thing in the world, but it will be a lot of fun.
It could work! (Did you get the citation? Young Frankenstein, 1974)
First problem
I returned to the W65C02 datasheet to check how many GPIOs I needed. Here’s the count:
16 bits for the Address Bus
8 bits for the Data Bus
1 bit for the PWM clock
8 bits for SOB, VPB, RDY, IRQB, NMIB, SYNC, RESB and BE
Total: 33 bits. Unfortunately, the Raspberry Pi has only 26 GPIO pins. I had three MCP23017 16-bit I/O Expander breakout boards from a previous project; using two of them, I could satisfy the project’s needs. The only problem is that the maximum speed of the MCP23017 is 1.7 MHz over the I2C interface. Since the Apple 1 ran at 1 MHz, there is no big issue here. I could also swap the MCP23017 with the MCP23S17, which allows a maximum speed of 10 MHz over the SPI interface. I must also consider the maximum speed of the I2C bus, plus the software overhead. OK, this system is not going to be fast.
Wiring, finally.
The W65C02 has been delivered, and I can start putting some wires on the breadboard. The first design I want to try wires up the Address Bus, the Data Bus, the RWB pin, and the PHI2 clock. The RDY, IRQB, NMIB, BE, SOB, and RESB pins will be forced high by connecting them to 3.3 V from the RPi. I also want some blinking LEDs to show me the content of the Address Bus and the Data Bus; I will use three 10-segment LED bar graphs. The result is a mess of wires, but it should work fine.
Software
Python is the language of choice. I can use the smbus2 library to talk to the MCP23017 expanders over I2C. C would be much faster, but speed is not an absolute need for now. Here’s the basic pseudo-code:
Init the GPIO pins
Init the MCP pins of MCP1 Bank A and Bank B
Init the MCP pins of MCP2 Bank A and Bank B
allocate 65536 bytes of memory and init it to 0x00
set location 0xFFFC to 0x00
set location 0xFFFD to 0x80
load program into memory at location 0x8000
reset W65C02
while true:
    read RWB pin
    read addressbus
    clock_rise
    if RWB == 1:  # Processor wants to READ
        read_data = memory[addressbus]
        write read_data to databus
    else:         # Processor wants to WRITE
        databus = read databus
        memory[addressbus] = databus
    clock_fall
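The memory half of that loop is plain Python. Here is a minimal host-side sketch of it (the function names are mine, not the project’s):

```python
# 64 KiB of simulated memory; the 6502 reset vector at $FFFC/$FFFD
# points to $8000, where the test program is loaded.
memory = bytearray(65536)
memory[0xFFFC] = 0x00  # reset vector, low byte
memory[0xFFFD] = 0x80  # reset vector, high byte -> $8000


def load_program(code, base=0x8000):
    """Copy the assembled program bytes into simulated memory."""
    memory[base:base + len(code)] = bytes(code)


def service_bus(address, rwb, databus=None):
    """One bus transaction: on a read (RWB high) return the byte to drive
    onto the data bus; on a write, latch the data bus value into memory."""
    if rwb:
        return memory[address]
    memory[address] = databus
```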
This turned into about 200 lines of Python code.
Run
As you may have noticed from the pseudo-code, I am not using the RPi PWM clock; I wanted more control over what was happening. Surprisingly, it worked after a few adjustments. This is the first 6502 assembler program I wrote to test Abacuc:
ROM_START    .equ $8000
RESET_VECTOR .equ $fffc

    .org ROM_START   ; Origin address
Start:
    LDX #$00         ; Initialize X register to 0
    STX $D000        ; Store the count at memory location $D000
Loop:
    INX              ; Increment X register
    STX $D000        ; Store the updated count
    CPX #$01         ; Compare X to $01
    BCC Loop         ; Branch back to Loop if X < 1
    JMP Start        ; Start over

    .org RESET_VECTOR
    .word $8000
I compiled it using the vasm6502_oldstyle assembler by running: vasm6502_oldstyle -wdc02 -Fbin -dotdir count.s -L count.lst -o count.bin
If you run a “hexdump -C” on count.bin, this is what you get:
The program to load into memory starting at 0x8000 is then: code = [0xa2, 0x00, 0x8e, 0x00, 0xd0, 0xe8, 0x8e, 0x00, 0xd0, 0xe0, 0x01, 0x90, 0xf8, 0x4c, 0x00, 0x80, 0x00]
Final considerations
It is too early to post the Python code. It is rough and does not yet do everything that is needed; I wanted a proof of concept to check that I was on the right path. I will make the final design publicly available when it is finished.
For as long as I can remember, photography has been a deep-rooted passion of mine. The notion of capturing moments in time through the lens of a camera has always fascinated me. Back in the day, I even transformed my minuscule bathroom in my modest flat into a makeshift photo lab. I would spend hours there, meticulously developing black and white prints, experimenting with colors, and wrestling with chemicals to achieve the perfect image. Creating a truly dark environment in that cramped space was no easy feat, and the whole process was a labor of love. But now, with the advent of digital photography, things have become significantly more straightforward.
The thought of buying a camera had been on my mind for an eternity. Finally, I took the plunge. I acquired a second-hand Leica X2, a beautiful relic from Germany, which had first hit the market back in 2012. This compact wonder was equipped with a fixed 36mm-equivalent F2.8 lens and a 16.2MP CMOS sensor, specs that might pale in comparison to the latest gadgets like the iPhone 15, boasting a whopping 48MP resolution. But to me, it was not about keeping up with the latest and greatest; it was about something deeper.
“Why a second-hand camera?” you might ask. Well, for one, I relished the idea of owning a piece of equipment that had a life before it landed in my hands. It felt like adopting a pet from a shelter, giving a forgotten treasure a new lease on life and sparing it from the fate of a landfill. It was my small contribution to the world of sustainability.
Secondly, affordability played a significant role in my choice. Leica had always held a special place in my heart, but its price tag had always remained elusive. While this second-hand camera wasn’t exactly a steal, it was a Leica, and that alone made it worth every penny.
And then, there was the thrill of the hunt. Scouring the internet for the best deal on used items, endlessly researching the perfect compromise between price, condition, and accessories – it was a game that I had missed dearly. Back when eBay was primarily a hub for second-hand goods, the hunt was even more exhilarating.
“But why not use your iPhone?” you might wonder. True, the iPhone boasts a nearly perfect camera, but it’s also part of a larger ecosystem filled with applications, notifications, messages, and emails. Distractions lurk around every corner, ready to pull you away from the pure act of photography. I craved a device that allowed me to focus solely on the art of image-making, to be fully present in the moment. For me, the ideal scenario meant switching off the phone entirely while capturing photographs.
Now, you might be wondering why I opted for a camera with a fixed 36mm-equivalent lens. Well, I yearned for something small and lightweight, a camera that I could carry with me effortlessly in my backpack every day. My previous camera, a Canon EOS 7D Mark II, was a behemoth with a collection of lenses and filters, weighing as much as a refrigerator.
But more than that, I wanted a tool that would force me to think deeply about composition, framing, and the entire photographic process. If I needed to get closer to a subject, I wanted to physically move closer, not just rely on a zoom lens. I wanted to be fully engaged in the act of photography, to grapple with the limitations of the camera’s settings and behaviors. Point-and-shoot simply didn’t align with my artistic vision.
As I eagerly await the arrival of my Leica X2, which is currently making its way across Europe, I can’t help but wonder if this camera will truly work for me. Will it reignite my old passion for photography? Only time will tell, but one thing’s for certain: I’m itching to start using it and rediscover the world through its lens once again.
Let’s start from the beginning. There will be some repetitions in this post. I have already written about Geremia, and this post summarizes the project itself.
The origins
I am an avid reader and love Marco Vichi as an author. The main character of Vichi’s books is a policeman named Franco Bordelli. In one of the novels, Bordelli receives a skull as a gift from the forensic pathologist he works with. The novel takes place in the 60s, and, at that time, it was not illegal to have a real human skull. Bordelli places the skull on one of the kitchen shelves, and in all subsequent novels, it is common to find him talking to the skull. I found it an exciting idea, but I wanted to twist it. Let’s give some life back to the skull using modern technology.
Sourcing the skull
In 2023 it would be ridiculous to look for a real skull (in case you are wondering: no, I am not insane.) Amazon could help. I wanted a life-size skull with a moving jaw and enough space in the cranium to hold the electronics I had in mind. It also had to be realistic. After some research, I bought an anatomical model of a skull (https://www.amazon.it/gp/product/B007S9ZES4/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1). When the skull arrived, it looked perfect: it looked real, had enough space for the electronic circuits, and had a movable jaw.
What should it do?
My idea was to build a notification device connected to my personal computer. The skull would play a little choreography each time a notification arrived. I was thinking about something basic: open and close the jaw, blink two LEDs representing the eyes, and play a notification sentence with a creepy voice.
How could I accomplish that?
As soon as I started thinking about the idea, I realized an Arduino board would be perfect for this project. I could use two ports of the Arduino board to drive the two LEDs, use another port to drive a servo motor, and connect an MP3 player to play pre-recorded sentences. I had an Arduino Leonardo and a servo motor I could try using. I put together a breadboard circuit to test the basic idea. First, I wanted to test each segment of the choreography alone: test the two LEDs, move on to the servo motor, and finally test the MP3 player. The first problems started to surface:
The skull jaw is tied to the cranium with a strong spring. When I tested the servo motor on the breadboard and simulated the spring tension, I quickly noticed that my servo motor was not strong enough. I had to buy another servo motor with high torque to be sure it could move the skull jaw. (https://www.amazon.it/gp/product/B09KZ8VTNB/ref=ppx_yo_dt_b_asin_title_o05_s01?ie=UTF8&psc=1)
The new servo motor introduced a new problem. When it moved with a load attached to simulate the spring tension, the Arduino board randomly reset itself for no apparent reason. I soon understood that in electronics, there is never "no apparent reason." The Arduino board was resetting because the servo motor was drawing too much current from the board, which I was powering through the USB port. A quick search instructed me to use an external power supply with a voltage regulator to step the voltage down from 12V DC to 5V DC. Please note that I did not know electronics at all, apart from some basic notions from the past. I just copied a circuit from the internet and tested it on another breadboard with a multimeter.
The Arduino board I used was not powerful enough to stream audio from my PC, so I used an MP3 player instead. I found an extremely cheap MP3 player from China. The first component I received refused to work, and I could not understand why. I wired it as the manufacturer explained and used some Arduino test code from the manufacturer. No way, it always refused to work. I searched for an answer on Google and found out that there are non-working clones of these cheap devices, and I had just gotten one of those. I bought an original DFPlayer Mini, and all the problems went away. (https://wiki.dfrobot.com/DFPlayer_Mini_SKU_DFR0299) The picture below shows the very first working breadboard. In the back, you will notice an oscilloscope. During these months, I got intrigued with electronics and hardware hacking. I borrowed the oscilloscope from a friend who was no longer using it for another project, but that's another story.
Building the Arduino shield
I planned to build an Arduino shield to snap on top of the Arduino Leonardo. The shield had to host the LED connectors, the servo motor, and the MP3 player speakers. Again, this was the first time I had built something like this. I did not know how to solder, and I did not know anything about board routing. I bought some empty Arduino proto shield boards (https://www.amazon.it/gp/product/B093ZB5M1B/ref=ppx_yo_dt_b_asin_title_o02_s01?ie=UTF8&psc=1) and started the design. I confess I used Affinity Designer to draw the circuits. I know every electronic engineer out there is already laughing out loud. I also gave KiCad a try, but I failed miserably. The only thing I was able to accomplish was drawing the schematic of the circuit. Again, something I had never done before. It may not be correct. Ok, I will make you laugh at me again. Here is a picture of the schematic:
The idea was to draw the schematic in KiCad, design a proto-shield PCB, and then have it fabricated by a service like PCBWay or similar. Too complicated for my brain. I confess I gave up on day two. It took me four iterations to get a working proto shield, but I was extremely satisfied when I saw that everything worked as expected. Below is a picture of the proto shield with all the devices connected and ready for testing:
The software
This was something I felt much more comfortable with. The idea was simple: one application running on my PC, polling services for potential notifications and communicating with Geremia over a serial port connection. A simple state machine to take care of everything. These were the basic requirements:
I wanted to use Python as a programming language. Perfect for a quick and dirty project.
Finally, I wanted a self-contained application that would not run from the command line. I did not want a terminal window on my desktop; the perfect place would be an application sitting in the menu bar. The rumps Python library was up to the task (https://rumps.readthedocs.io/en/latest/). It also offers the option to create a standalone application, which is exactly what I wanted.

The software on Arduino was a different story. As you may know, Arduino is not multi-tasking or multi-threaded. This was a huge problem because I wanted all the parts of the choreography to play at the same time. We do not have multi-tasking or multi-threading, but we can simulate it. The protothreads Arduino library solved the problem (http://dunkels.com/adam/pt/). The Arduino application is made of a few threads:
One thread will monitor the serial port and respond to commands from the PC.
The servo motor thread will make the servo motor move when needed.
The left LED and right LED threads will make the LEDs blink.
The MP3 player thread will make the board play a sound when there is a notification.
The DFPlayer thread will monitor the MP3 player status and switch the state back to idle when an MP3 file has finished playing.

In principle, it looked easy, but it needed a lot of tweaks since the protothreads library imposes many limitations. It took more time than expected. The final result is acceptable, even if you pay a price in the fluidity of the choreography.
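Since protothreads can be hard to picture, here is a minimal sketch of the cooperative round-robin idea the library implements, written in Python rather than Arduino C so it stays short: each "thread" does one small step and then yields control back to the scheduler. The names and the generator-based scheduler are my own illustration, not the actual Arduino code.

```python
def blink(name, ticks):
    """A cooperative 'thread': does one small step, then yields control,
    just like a protothread returning to the main loop."""
    state = False
    for _ in range(ticks):
        state = not state
        # On the Arduino this step would be a digitalWrite() on the LED pin.
        yield f"{name}={'ON' if state else 'OFF'}"

def scheduler(threads):
    """Round-robin scheduler: give every thread one step per pass until all
    of them finish. This is roughly what loop() does with protothreads."""
    log = []
    while threads:
        for t in list(threads):
            try:
                log.append(next(t))
            except StopIteration:
                threads.remove(t)
    return log

# The two LED threads appear to blink at the same time because their steps interleave.
log = scheduler([blink("left_led", 2), blink("right_led", 2)])
print(log)  # ['left_led=ON', 'right_led=ON', 'left_led=OFF', 'right_led=OFF']
```

No thread ever blocks; each one only takes a tiny step per pass, which is why a `delay()`-style call inside a protothread would freeze the whole choreography.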
Assembling
I had the proto-shield and the software ready. It was about time for the brain implant for Geremia. The first issue was insufficient space in Geremia's cranium for the new servo motor. The original servo motor I had planned to use was much smaller than the new one. The only option I found was to place the servo motor on the base holding Geremia. That sounded good, but I needed to find a way to keep it in place and ensure the servo motor's torque would not shift it during operation. I designed a mounting base and printed it on my FLSun Q5 3D printer. I had never modeled anything in 3D before, so I installed Fusion 360 on my PC, which is free for personal use. I watched a few tutorials on YouTube, and finally, I was able to print the servo motor base and holder. I was surprised by the power of technology: I could create something from scratch and have it on my desk in a matter of hours with a few cheap tools. Unbelievable. Here are a few pictures of what I designed and a picture of the finished item.
I spent the following thirty minutes assembling everything in Geremia:
Final result
Here is a video of Geremia playing a notification for new mail.
What’s missing?
I still need to fine-tune the software on Arduino and my PC.
I need to place a heat sink on the LM7805 voltage regulator, since it tends to become quite hot while running the servo motor.
Hide the servo mounting base and bracket.
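The heat on the LM7805 is no mystery: a linear regulator turns the whole input-output voltage difference, times the load current, into heat. A quick back-of-the-envelope estimate (the 1 A servo current below is my assumption, not a measured value):

```python
# LM7805 dissipation estimate: a linear regulator burns off the voltage it
# drops, multiplied by the load current, entirely as heat.
v_in = 12.0    # volts from the external power supply
v_out = 5.0    # volts the LM7805 regulates down to
i_load = 1.0   # amps: assumed draw of a high-torque servo under load

p_heat = (v_in - v_out) * i_load
print(p_heat)  # 7.0 watts of heat, well beyond what a bare TO-220 package can shed
```

Seven watts is far more than the bare package can dissipate, which is why the regulator needs a heat sink (or, better, a switching regulator).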
Final considerations
I started the project thinking it would be simple, but it was not. I had to solve various problems along the journey, some of them difficult. The great thing is that I learned a lot in the process. And in the end, I made it, which is a great thing.
Yesterday I was looking at the Apple WWDC event, where they unveiled the Vision Pro.
During the event, I was thinking:
There is not a single flat surface on the Vision Pro hardware. Everything is curved. I went to sleep with a prayer for the hardware engineers who will have to make the Vision Pro production-ready. Best of luck, guys. As a side note, Apple will not be able to produce the headset at very high volume.
Apple said they applied for more than 5,000 patents for the Vision Pro. That's an incredibly high number. Only companies with the economic power of Apple could sustain such an effort, and we can count them on the fingers of one hand.
Tim Cook and his colleagues never used the term Metaverse.
The narration of the houses where the Vision Pro was pictured left me bittersweet: big houses, high-quality interior design, extreme minimalism, and tidiness. I am looking around my home right now, and it does not look like what Apple was suggesting as the Vision Pro environment.
3,499 US dollars is a lot of money.
No sign of potential interaction between users wearing the Vision Pro. They showed how you would still look like a human while interacting with people not wearing a Vision Pro, but gave no idea of how it will behave when two people wear the hardware in the same room.
Tim Cook said: “One more thing!”
No mention of AI. Yeah, he talked about machine learning, but not in the context of interaction with a chat.
We must wait at least six months to see the device in stores. I hope they will try to eliminate the ugly external battery, even if it will be tough.
It feels like a technology showcase.
Thinking about what they showed us, Apple imagines us using the device: sitting on a chair, in front of a flat surface, in a plane seat, or lying in bed.
While working on Geremia, the talking skull, I needed a simple MP3 player that I could drive from the Arduino Leonardo board: a device that is easy to operate over a serial port and not expensive.
After some research, I ordered a couple of DFPlayer Mini for 3.07 US dollars each.
Truth be told, I destroyed both of them in the first two iterations of the Arduino proto shield. No big deal, they are not expensive. Since I wanted to make progress on the project, I needed a few more devices as soon as possible. I looked on Amazon and found an offer of five for 8.40 US dollars. That makes 1.68 US dollars per unit.
The courier delivered the package; I installed the unit on my proto shield and started playing with the software. You can use many libraries for this MP3 player with Arduino, and the logic is straightforward.
The device is very cheap, and it has limitations: you have to name the files on the microSD card carefully, ensure there are no files other than MP3 files, and (!!!!) load the files onto the card in the same order as their names.
After half an hour of tinkering, I could play MP3 files on the microSD card.
From a software standpoint, I needed to ensure that the finite-state machine on Arduino kept the state BUSY until the specific MP3 file had finished playing. I wrote another proto-thread to monitor the BUSY pin on the DFPlayer Mini. The pin is low while the player is busy doing something and high when the player is idle.
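The logic of that monitoring thread is simple enough to sketch. This is an illustrative Python model of the active-low BUSY pin, not the Arduino code itself:

```python
BUSY, IDLE = "BUSY", "IDLE"

def dfplayer_state(busy_pin_level):
    """One polling step of the DFPlayer monitor thread.
    The BUSY pin is active-low: 0 while a track is playing, 1 when idle."""
    return BUSY if busy_pin_level == 0 else IDLE

# Simulated pin readings: idle, then a track starts, plays, and finishes.
readings = [1, 0, 0, 0, 1]
states = [dfplayer_state(level) for level in readings]
print(states)  # ['IDLE', 'BUSY', 'BUSY', 'BUSY', 'IDLE']
```

On the real board this check runs in its own proto-thread, and the transition back to IDLE is what releases the finite-state machine for the next notification.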
No way, it was not working at all: erratic behavior every single time. I checked the wiring, the soldering, and the power levels, but could not find a way through. I even connected the device to a logic analyzer to look at the behavior of that pin. No solution. So I searched the net to see if someone else had run into the same problems and found a solution.
Someone wrote that there are DFPlayer Mini clones with many issues, some of which were similar to those I was experiencing on my device.
Now, wait! A clone for a five-dollar device?
Yes, that’s the case.
After downloading and installing a DFPlayer mini analyzer sketch for Arduino, I had confirmation that the device I was using was a clone. Many of the functionalities that an original device should have were not working on my device.
I had bought cheap clones of a cheap device.
Lesson learned: there will always be a clone haunting you.
Over the weekend, I made some significant progress with Geremia, my talking skull project. I was able to finish the print and assembly of the servo motor mounting bracket, and to my surprise, it worked perfectly. I attached the bracket to the back of the skull and used a short string to connect the servo arm to the jaw. The movement was smooth and responsive, even with the short lever.
One minor issue I need to address is finding a reliable way to make this connection in the final assembly. However, I’m confident that this is a problem I can solve quickly.
After finishing up with the hardware, I returned to the software side of the project. I have been working with two finite-state machines, with one running on the Arduino Leonardo and the other on my MacBook. The PC software is written in Python and communicates with the Arduino board via a serial connection.
The PC software monitors Geremia’s health at specific intervals and checks for new events to be notified via the talking skull. When there is a new notification, the software sends the notification over the serial port to Geremia, which then takes the appropriate action. This includes flashing the two LEDs, moving the skull jaw, and playing an MP3 file specific to the notification type.
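As a sketch of one iteration of that PC-side loop, here is the idea in Python. The wire command (`MAIL\n`) and the fake serial object are made up for illustration; the real application uses a pyserial connection and its own protocol.

```python
class FakeSerial:
    """Stand-in for a pyserial Serial object so the logic can run without
    hardware; in the real program this would be serial.Serial(port, baud)."""
    def __init__(self):
        self.sent = []

    def write(self, data):
        self.sent.append(data)

def poll_and_notify(port, check_mail):
    """One iteration of the PC loop: poll a service and, when there is a
    new event, send a notification command to Geremia over the serial link."""
    if check_mail():
        port.write(b"MAIL\n")  # Geremia blinks, moves the jaw, plays the MP3

port = FakeSerial()
poll_and_notify(port, check_mail=lambda: True)   # a new mail arrived
poll_and_notify(port, check_mail=lambda: False)  # nothing new, nothing sent
print(port.sent)  # [b'MAIL\n']
```

Injecting the transport this way also makes the state machine testable without the skull plugged in.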
The challenge with this setup is that everything needs to run in parallel to make the action appear natural. Unfortunately, Arduino does not support multitasking or multithreading, given the limited hardware resources available. However, there is a solution available – I imported the protothreads library, which simulates multithreading via software.
Modifying my source code to incorporate protothreads was not a significant challenge, although there are a few things to be careful about. For example, you cannot declare local variables in protothreads, so you need to use global variables or static variables inside the proto-thread. Additionally, it’s essential to avoid synchronous functions inside the proto-thread, like using the delay() function. Instead, you need to use PT_SLEEP(), which is part of protothreads.
I have five proto-threads in my application, which include one for the left LED, one for the right LED, one for the servo motor, one for the MP3 player, and one for the serial communication. It’s exciting to see how Geremia’s brain has evolved to be multitasking, even though it wasn’t originally designed to be.
I still need to perform some more testing on the finite-state machines to ensure everything is working as expected. Nonetheless, I’m thrilled with the progress I’ve made so far and can’t wait to see what else I can accomplish with Geremia.
While I was working on the proto-shield for Geremia, I found myself getting more and more interested in electronics. At the same time, I found the intersection between hardware and security intriguing.
It was about time to give it a try.
While trying to sort out the clutter in my drawers, I found an extremely old Netgear WNCE3001 WiFi repeater. It is more than five years old and no longer useful. Why not tear it apart and play with it?
I took it to my desk and started to remove the plastics. That was easy since I had an iFixit kit. I ended up with the motherboard, and a quick inspection revealed that an RTL8196C-GR Realtek microprocessor powered the device and that a nice MX25L3206E CMOS flash was on the other side of the motherboard.
A deeper inspection of the motherboard revealed six unlabeled pins. Those could have been a JTAG or UART connection.
I picked up my multimeter, powered up the device, and measured the voltage from those pins. I quickly found the ground pin and noticed a varying voltage from another pin. That started to look like a serial interface.
A few months ago, I had picked up a second-hand oscilloscope for a little more than a hundred bucks. I connected one probe to those two pins, and after a few seconds of looking at the screen, it was obvious that it was a serial connection.
A few calculations from the scope measurements, and I found out it was a 38400 bps serial connection. I connected those two pins to a USB TTL-UART converter and fired up a serial terminal on my Mac. I could quickly see the boot sequence on my screen. A few minutes later, I found the RX pin on the device, and I had a working serial connection between the repeater and my PC.
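The calculation itself is one line: the width of the shortest pulse on the TX line is the bit time, and its inverse is the baud rate. The 26 µs figure below is an assumed reading, used only to show the arithmetic:

```python
# Baud-rate estimate from an oscilloscope measurement.
bit_time = 26e-6          # seconds: assumed width of the shortest pulse on TX
measured = 1 / bit_time   # roughly 38461 bits per second

# Snap to the closest standard UART rate.
standard_rates = [9600, 19200, 38400, 57600, 115200]
baud = min(standard_rates, key=lambda r: abs(r - measured))
print(baud)  # 38400
```

UART rates come from a standard set, so a measurement within a few percent is enough to identify the right one.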
It was interesting to notice that the boot sequence ended with my terminal in a working shell with root privileges.
I now have a root shell on a Linux box.
I looked around the file system and closely at the boot sequence. In a few minutes, I found my WPA2 key in plain text.
Even if I knew nothing about hardware hacking and electronics, that was extremely easy to find.
I also wanted to explore the CMOS flash and try to dump its contents. I had a Bus Pirate lying around and used it to dump the firmware. The SPI protocol is relatively easy to understand, but in this case, you don't need to know it since the flashrom utility deals with it for you.
I could not read the flash chip while it was on the motherboard, so I had to desolder it. No big deal; I was going to trash the device at the end of the experiment anyway.
After the memory dump, I used binwalk to extract the files from the dump file.
No more exciting things to do with this device.
I could not define this device as the most secure one out there.
Maybe the next time I could try to get one of those cheap security cameras and look at what’s in there.
Over the last weekend, I made some progress on my talking skull, Geremia.
I completed the client and server applications that will drive the skull: a straightforward state machine that polls different resources on the Internet and notifies the Arduino board sitting in the skull to perform the actions related to the notification (i.e., blink the LEDs that play the role of eyes, play an MP3 file to simulate speech, and activate the servo motor to make the skull jaw move.) The server and the client are connected over a simple RS232 interface.
That was a couple of hours of coding.
It was about time to figure out how to install all of this stuff into the skull. There is enough space to host the Arduino board and the proto shield I assembled. Unfortunately, I could not fit the servo motor, no matter what I tried. There is simply not enough room, and giving the servo motor a stable mounting would be challenging. I used a DM996 servo motor because the spring holding the jaw is pretty strong, and I needed something with high torque. This is also why I had to add a voltage regulator circuit to the proto shield to deal with the Arduino's 12 Volt external power supply, but that is another story.
My first thought was to abandon the moving jaw feature of Geremia. Thinking about it, I realized that it was the coolest feature of the skull, and I didn't want to drop it simply because I could not find a viable solution. This is when you feel sad because you can no longer turn to your father, who was a remarkable mechanical engineer. We could have had some great fun working on this together. Again, this is another story.
I started thinking about a possible solution. After some time, I found that the only solution was to host the servo outside the skull, anchored to the skull base.
I grabbed my reMarkable and started drawing, not like a mechanical engineer but more or less like a first grader. After some time, I thought I had a solution. The servo motor would be visible behind the skull, but I can cover it with something. Geremia would look very cool with a fancy scarf.
I had my design ready, but how could I build it? I found out that Fusion 360 is free for personal use. I had never used any 3D modeling software in my entire life. I downloaded and installed it on my Mac. I grabbed a caliper I bought in China decades ago and started taking measurements of the servo motor and the skull base. I added the relevant dimensions to my terrible drawing and was ready to model it in Fusion 360. It was mostly made of boxes, with four holes to anchor the servo motor.
I viewed a couple of tutorials on YouTube on how to use Fusion 360, and a couple of hours later, I had my design ready.
I saved the file and opened it with Ultimaker Cura to print it on my FLSun Q5 3D printer. I adjusted a few parameters to strengthen the object and launched the print. Sunday morning, the print was finished. All I need now are a few screws and nuts to assemble it with the servo.
While I was having lunch, I was thinking about what I did, and I was surprised. It is incredible what you can do with technology today. Knowledge is available to everyone with an internet connection. Twenty-four hours before, I didn’t know anything about 3D design, and now I have something I designed sitting on my desk.