
Jamie Balfour's Personal blog


Enough said, really.

Tim Berners-Lee, the man considered the father of the web, conceptualised the idea back in 1980, but it wasn't until 1989 (two years before I was born) that he came up with the solution we now know as the web.

Berners-Lee suggested combining hypertext and the Internet for document sharing back then. Now, the web has become far more than this and, coupled with the Internet, has become a superhighway of information.

The trend of smartphones replacing functionality that was once carried out by a desktop or laptop computer continues to grow. With companies like Samsung and Google at the forefront of this technology, with features like DeX, we are beginning to see a new bring-your-own-device-to-work scenario emerging. A lot of this functionality has come from the latest chips being powerful enough to perform desktop-like functions and provide a desktop-like environment without struggling, but it's also down to advancements in cable technology like USB Type C and its Alt Mode function.

Let's look at the first point made here: smartphone chips are more powerful than ever, boasting up to 6GB or 8GB of RAM and decent graphics processors. On those specifications they are more than capable of running Windows 7, so they are more than capable of running your desktop applications, such as Microsoft Word, in a smartphone form. Microsoft attempted this with their phones using Continuum [1]. It was a fairly wild idea back in 2015, when smartphone processors were nowhere near what they are now, but this kind of thing is now very possible for the average job (we're not talking about playing games on a secondary display with a mouse and keyboard), such as Microsoft Word or PowerPoint. I can see this being very useful for me if I were doing another talk, where I could simply dock my phone rather than my laptop. Apple has now made this possible on the iPad with the new Stage Manager feature, which is very impressive - this kind of thing should come to the iPhone when docked.

Now let's look at the second point made here: connectivity. Connectivity is perhaps the main reason this kind of thing is becoming possible. Microsoft was trying to achieve it back in 2015 over micro USB (which, by the way, was USB 2.0) and only later over USB Type C. It is a great idea, one that I fully support. Samsung's DeX feature is actually so seamless that a company my brother was contracting at, with some 200+ employees in his part of the business, had moved to bring-your-own-device: staff would dock their phones into USB Type C docks to connect to the corporate network and gain a mouse, keyboard and display. Not only does this save the company money, it also reduces the number of devices users actually need, adds functionality such as making phone calls from their desktop system and, perhaps most interestingly, can be used to reduce the amount of time users spend on non-work-related apps on their phones (users can enable a work mode when the phone is docked, allowing them to use only work-related apps).

Now, some say that the future in which smartphones take over from laptops is still a long way off [2], and sure, I can definitely agree, but when I heard that a whole 200+ people were using their phones with Samsung DeX as their sole working device, I was absolutely astonished. Sure, where I work we've done away with desktops - your laptop simply connects to the keyboard, mouse, two displays and projector by docking with USB Type C - but that's not quite the same as replacing a laptop with a smartphone, or even a tablet for that matter.

[1] https://www.theverge.com/2015/4/29/8513519/microsoft-windows-10-continuum-for-phones

[2] https://www.teamtreysta.com/will-smartphones-take-over-laptop-functions-in-the-future/


Moore's Law was kind of like the golden rule for computer systems. Coined by Intel co-founder Gordon Moore, the rule has been the centrepiece and fundamental principle behind the ever-improving computer systems we have today. It basically spelt out the future: the number of transistors on a chip would double roughly every two years, and with that computers would get better performance year after year.
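
Stated loosely as a formula (my own paraphrase, assuming the usual two-year doubling period), the trend looks something like this, where $N_0$ is the transistor count at some starting point and $N(t)$ is the count $t$ years later:

$$N(t) \approx N_0 \cdot 2^{t/2}$$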

It did this not by increasing the package size of CPUs but by reducing the feature size. Take my EliteBook, for example. It has a Ryzen 6850U processor, built on a 7-nanometer fabrication process, which roughly means that the features of each transistor within a package (a CPU die) are around that size. The smaller the feature size, the more likely it is for something to fail or go wrong during production, making CPUs more and more difficult to manufacture as feature sizes shrink. Not only that, it has always been said that we will be unable to get the feature size smaller than the size of an atom (which is approximately 0.1 nanometers). The theoretical boundary for the smallest feature size we can manufacture is approximately 0.5 nanometers - that's not that far away from where we are at 4 nanometers in 2023.

Over the last few years, manufacturers have been trying to squeeze every bit of extra performance out of the latest chips. Apple has been increasing the physical size of its processors and joining dies together (think of the M1 Ultra, which is essentially two M1 Max dies stuck together with a little bit of magic). This makes the processor very large and unsuitable for smaller devices. AMD, on the other hand, has moved to a design where wastage is less frequent, thus increasing the yield of good processors and in turn allowing them to cram more performance into their dies.

To combat such issues, the manufacturers of some CPUs moved to a chiplet design. In a chiplet design, several components - the memory controller, IO controllers such as USB and so on - are split out into separate dies, compared with the monolithic architecture used in the past, where the entire processor resides within one die. The first time I experienced something like a chiplet-based CPU was the Intel Core 2 Quad, which was literally two Core 2 Duo dies on one chip. The downside is that it takes more space than building a chip using a monolithic architecture. There are also other complications, such as communication between these dies, but once they have been overcome for one chip the designs can be reused in other chips. There are power consumption concerns when two CPU dies are put on the same chip too, but with chiplets for IO and memory the distance between the CPU and those controllers is actually reduced, cutting power consumption and increasing performance.

Chiplet design also keeps costs down because it reduces the chance of binning a good processor when one section of it fails. For example, if all the IO were built into the CPU die (as is the case in a monolithic architecture) and only one part of the IO section failed, the whole processing unit would be binned. With a chiplet design, a failure in the IO section affects only that chiplet - a much less costly loss, as all it requires is a replacement IO die.
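
To make the yield argument a bit more concrete, here is a minimal sketch in C of the classic Poisson yield model (yield roughly e to the power of -D times A); the defect density and die areas are made-up numbers purely for illustration, not figures from any real process:

C

#include <math.h>
#include <stdio.h>

/* Poisson yield model: the probability that a die of area `area_cm2`
   contains zero fatal defects, given `defects_per_cm2`. */
static double die_yield(double defects_per_cm2, double area_cm2)
{
    return exp(-defects_per_cm2 * area_cm2);
}

int main(void)
{
    const double d = 0.2;        /* illustrative defect density (defects per cm^2) */
    const double big_die = 6.0;  /* one large monolithic die (cm^2) */
    const double chiplet = 1.5;  /* one of four chiplets covering the same total area (cm^2) */

    /* A single defect anywhere scraps the whole monolithic die... */
    printf("monolithic die yield: %.1f%%\n", 100.0 * die_yield(d, big_die));

    /* ...but with chiplets only the affected small die is thrown away. */
    printf("per-chiplet yield:    %.1f%%\n", 100.0 * die_yield(d, chiplet));

    return 0;
}

With those illustrative numbers, roughly 30% of the big dies come out clean versus about 74% of the chiplets, which is exactly the wastage argument above.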

Another way manufacturers are continuing to squeeze performance out of these chips is by embedding algorithms in them. This has been a feature of CPUs since the Pentium MMX, which brought several instruction sets to improve the multimedia capabilities of computers back then. It basically means that instead of a programmer writing, say, a vector encoding routine entirely in software, the operation is an instruction built into the CPU (exposed through extensions such as AVX-512) and therefore runs much faster. This is called hardware encoding. You'll see that Apple has done a lot of this in its M1 and M2 chips over the years to give them an even better performance result than the Intel CPUs they replaced. By doing this, Apple has managed to improve the effectiveness of its software through the use of hardware.

This is actually something that could become a problem, however, and it might damage the cross-compatibility of software across operating systems and platforms. I say this because if a piece of software is developed to use AVX-512, it's very likely that it can still be made to work on a system without AVX-512 instructions by falling back to a slower code path. But with functionality built around libraries specific to a GPU or CPU feature, cross-compatibility may not be possible without writing massive amounts of additional code (for example, DirectX and Metal on macOS both cause issues when porting).
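
As a rough sketch of how the CPU-instruction route stays portable, here is what runtime dispatch can look like in C using GCC/Clang's __builtin_cpu_supports check (the vector-add functions are hypothetical stand-ins of mine; a real build would compile the fast path with actual AVX-512 intrinsics):

C

#include <stdio.h>

/* Plain fallback that runs on any x86-64 CPU. */
static void add_floats_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Stand-in for an AVX-512 implementation; a real one would use
   AVX-512 intrinsics and be compiled with -mavx512f. */
static void add_floats_avx512(const float *a, const float *b, float *out, int n)
{
    add_floats_scalar(a, b, out, n);
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];

    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx512f"))
        add_floats_avx512(a, b, out, 4);   /* fast path on capable CPUs */
    else
        add_floats_scalar(a, b, out, 4);   /* graceful fallback everywhere else */

    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}

The point is that the instruction-set route can degrade gracefully at run time, whereas the GPU-library route (DirectX versus Metal) usually means rewriting a whole rendering or compute backend.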

I have always believed, since I was a teen, that the second method is indeed the way forward, but it would really only work in a single CPU/GPU architecture world, a bit like if everyone were using x86. That's never going to happen, but perhaps if we had a library like Vulkan that could abstract over those underlying APIs or hardware and make things simpler for developers, then maybe this is actually the best option.

Posted by jamiebalfour04 in Tech talk
intel
gordon moore
apple
chiplet
monolithic
vulkan
api
hardware
cpu
encoding
instructions
moore's law

Let's be honest here: Microsoft has been trying to ditch Control Panel since Windows 8 back in 2012, and yet here, all the way in Windows 11, 10 years later, Control Panel still resides at the core of the operating system. Its uses may have slowly begun to disappear, yet there are still some things you can only do within Control Panel.

Windows 11 is still a mess, with its settings spread across the whole system.

For example, to change the mouse double-click speed, one must go to Control Panel and then the Mouse setting. There are other weird things in here too, such as the Work Folders feature, the Sync Centre (which should be a separate app in my opinion), AutoPlay, the Windows Mobility Centre (why does this still even exist?!) and Windows Defender Firewall. But the worst of all is Programs and Features; not knowing where to go to uninstall an app will confuse some people. The Mail entry is another weird one, as is Phone and Modem (which should have been removed from Control Panel a long time ago).

So why do these things still exist? Does Microsoft not care about making the OS perfect anymore?

We all know how infuriating it can be when a computer just stops working, and repairability is the key to making the world more sustainable. SoCs in general have made some parts of this easier than before, but they have also brought many other issues with them.

First of all, SoCs, or Systems-on-Chips, are replacements for the architecture that existed for generations, where the CPU, GPU, northbridge, southbridge and main memory sat in completely separate parts of the system.

Back in the Core 2 Duo, Phenom and Athlon days we had IGPs (Integrated Graphics Processors) that communicated through the northbridge, which meant they were a long way from the central processing unit and were therefore slowed by the long path they had to take to perform operations. We also had two bridges: the northbridge (or memory hub), which connected the CPU directly to the main memory (RAM and ROM) and, in the case of an IGP, to the graphics processor; and the southbridge. The northbridge was removed first by AMD and then by Intel (you'll actually notice that whilst AMD has always been the underdog in the CPU market, it brought some of the best innovations to the market, such as x86-64 and HyperTransport).

But we still had the southbridge for a very long time, and it continued to provide backwards compatibility with older hardware such as PS/2, RS232 and other obsolete connectors through the SuperIO hub. Lately, however, the SuperIO hub doesn't really need to exist, and the whole southbridge has been integrated as a chiplet into the main chip (or SoC). Not only does this improve performance, it also reduces power consumption and heat.

The separate chiplet idea is also very feasible compared with integrating everything into the central processing unit or a dedicated external chip, as you'll appreciate if you're familiar with the idea of binning chips.

SoCs have problems though

As an advocate for the SoC architecture over traditional architectures, I can see the humongous benefits it brings: it bridges the gap that existed before, whereby the performance of a computer was limited by how long the wires between different components were. But it does bring one caveat.

That one caveat is the fact that all SoCs in laptop computers are soldered, often in a BGA-style package. This means the whole board needs replacing when the SoC has one faulty part, and, with more being integrated into the SoC, this is more likely to happen. It is therefore not only far more expensive to replace a SoC, but also far more difficult.

I've replaced many CPUs in laptops from my teenage years onwards - most notably replacing the chip in my Pentium 4 laptop with a Pentium 4 Mobile-M - but I would be hard-pressed to try to replace a BGA chip these days.

How they need to be improved, particularly in laptops

As the Right-To-Repair movement progresses even further, one of the main things people should be pushing for is PGA-based or LGA-based sockets that allow direct replacement of the SoC again. Heck, even the Pentium M CPUs in their Socket 479 sockets were replaceable to the point of simply removing one and slotting in another. In a world where slimness is the most important thing, we really need to think about sustainability too.

I finally decided to jump away from big towering desktop computers to a more reasonable laptop-based eGPU setup. 

In the next few months, I will begin dismantling my desktop setup which I have had since January 2019. The desktop I currently have has parts that have survived 5 generations of desktop PCs, and some parts in it are as old as 2007. The PSU in it replaced my 10-year-old Corsair HX850 which lasted exactly ten years from when I built my first PC in 2009. It marked the point when I was considering ditching the desktop PC altogether and going down the USB-C-based laptop route which I tried with my Razer Blade Stealth until the screen started to fail.

I have been torn between two laptops that both comply with my environmental concerns and my Right-To-Repair beliefs. They are the Framework laptop and the HP EliteBook 845 G9. I have mixed feelings about both, but I am more swayed towards the Ryzen processor than an Intel chip that offers little performance improvement over my brother's Skylake-based computer.

The Collatz conjecture, named after Lothar Collatz, who introduced the idea, is a perfect example of a mathematical problem that has so far resisted the tools available within the domain and scope of mathematics today.

It goes a little like this:

$$f(n) = \begin{cases} \frac{n}{2} & \text{if } n \equiv 0 \pmod{2} \\ 3n+1 & \text{if } n \equiv 1 \pmod{2}. \end{cases}$$

It's a nice little conjecture to turn into a program, so I decided to write it in YASS to demonstrate it, based on the definition above:

YASS
function f ($n)

  if($n % 2 == 0)
    $n = $n / 2
  else
    $n = ($n * 3) + 1
  end if

  return $n

end function

$n = input("Please insert a start number")

$iterations = 0
print($n)
while($n != 0 and $iterations < 5)
  
  $n = f($n)

  print($n)
  if($n == 1)
    $iterations++
  end if
end while
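
Assuming input() hands back the start value as a number, a quick trace for a start value of 6 would print 6, 3, 10, 5, 16, 8, 4, 2, 1 and then keep cycling through 4, 2, 1 until 1 has been printed five times - the famous 4-2-1 loop that the conjecture says every positive integer eventually falls into.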

It's hard to believe, but as of just under a month ago I am over 30 years of age. And so is the world wide web.

I was discussing this with someone yesterday - how the w3 has evolved from being a simple document-sharing system into something that allows you to build and use powerful applications within it. I say this because on Monday I went to one of my favourite restaurants and they use a web app to manage everything, and I mean everything. It's very impressive.

Anyway, take a look at How To Geek for more information on this:

https://www.howtogeek.com/744795/the-first-website-how-the-web-looked-30-years-ago/

Posted by jamiebalfour04 in Tech talk
www
w3
changes
history

MXM, or Mobile PCI Express Module, is one of the most interesting standards; I actually found out about it in 2005 when attempting to fix a laptop for a friend (Calum). His Fujitsu Amilo 3438 featured a removable GPU - something I had never seen before. When I was asked to try to repair it, I was honestly astonished at how easy it would be to do.

Unfortunately, it didn't end as nicely as it should have, and it seemed that the laptop was beyond repair. I am happy that I got the chance to try though, because it brought my attention to the MXM standard.

Having seen that MXM was a proper standard, and a very good idea in practice, I have been a fan of it for a very long time.

As someone who believes strongly in the Right to Repair, MXM may have a big part to play in the future, but what do the companies behind the development of systems that could incorporate MXM actually think?

What is MXM?

MXM is one of those really brilliant ideas that is unfortunately held back by the manufacturers of computer systems. It is no doubt more expensive to add replaceable graphics cards to a system compared with permanently soldering them, and for the manufacturer it's also less profitable in the long run - and that is my main concern.

The MXM standard is designed to give owners the ability to upgrade their systems at a later date or replace parts when they stop working. The problem with this idea, at least in the eyes of the manufacturer, is that customers will stick with the same computer system for longer rather than upgrading it regularly.

Another major issue with MXM is that it takes up more room than a soldered GPU and therefore doesn't allow for incredibly thin laptop designs (like the MacBooks, where Apple sacrifices everything to get thinner and thinner computers).

MXM cards, being a detachable component in the system, are also more likely to fail due to connector failure. This is far less likely with soldered GPUs.

But even with all of these problems, MXM still addresses one major concern that should be more prominent now than ever - the environmental impact. It concerns me that we have become very wasteful with computers with soldered memory and storage drives (like my old MacBook Pro and now my current MacBook Pro). A soldered GPU basically means that when the GPU decides to pack it in, the whole system stops working. I've had this happen on numerous computers. MXM allows us to replace a broken GPU or upgrade an old one, bringing a new lease of life to the computer. From a purely environmental point of view, this would be amazing.

With these new right-to-repair laws being passed, surely the time is right for MXM to take centre stage?
