
Jamie Balfour's Personal Blog

Chip

Moore's Law was kind of like the golden rule for computer systems. Formulated by Intel co-founder Gordon Moore, it has been the centrepiece and fundamental principle behind the ever-improving computer systems we have today. It basically spelt out the future: the number of transistors on a chip would roughly double every two years, and with it computers would keep getting better performance year after year.
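To give a feel for what that doubling means, here is a small back-of-the-envelope sketch of my own (not from the original post) that projects transistor counts forward assuming a clean doubling every two years. The 2003 starting figure is purely an illustrative assumption.

C
#include <stdio.h>

/* Rough Moore's Law projection: assume transistor counts double every
 * two years. The 2003 starting figure of ~100 million transistors is an
 * illustrative assumption, not a real chip. */
int main(void)
{
    double transistors = 100e6;
    for (int year = 2003; year <= 2023; year += 2) {
        printf("%d: ~%.0f million transistors\n", year, transistors / 1e6);
        transistors *= 2; /* one doubling per two-year step */
    }
    return 0;
}

Even from a modest starting point, ten doublings is roughly a thousand-fold increase, which is why the trend mattered so much.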

It did this not by increasing the package size of CPUs but by reducing the feature size. Take my EliteBook, for example. It has a Ryzen 6850U processor built on a 7-nanometer process. This figure refers to the fabrication process (the process node), and it roughly reflects the size of the features that make up each transistor on the CPU die. The smaller the feature size, the more likely it is for something to fail or go wrong during production, which makes CPUs progressively harder to manufacture as feature sizes shrink. On top of that, it has always been said that we will never get the feature size below the size of an atom (approximately 0.1 nanometers). The theoretical boundary for the smallest feature size we can manufacture is approximately 0.5 nanometers - not that far away from the 4-nanometer processes we are at in 2023.

Over the last few years, manufacturers have been trying to squeeze every last bit of performance out of their latest chips. Apple has been increasing the physical size of its processors and even joining dies together (think of the M1 Ultra, which is essentially two M1 Max dies stitched together with a little bit of magic). This makes the processor very large and unsuitable for smaller devices. AMD, on the other hand, has moved to a design where wastage is less frequent, increasing the yield of good processors and in turn allowing it to cram more performance into its dies.

To combat this issue, some CPU manufacturers moved to a chiplet design. In a chiplet design, several components - such as the memory controller and the IO controllers for USB and other interfaces - are split onto separate dies, compared with the monolithic architecture used in the past, where the entire processor resides on one die. The first time I experienced something like a chiplet-based CPU was the Intel Core 2 Quad, which was literally two Core 2 Duo dies on one package. The downside is that it takes more space than building a chip using a monolithic architecture. There are also other complications, such as the internal communication between these dies, but once they have been solved for one chip the designs can be reused in others. There are power consumption concerns when two CPU dies are put on the same package, but moving the IO and memory controllers onto chiplets in that package (rather than onto separate chips on the motherboard) actually reduces the distance between the CPU and those controllers, which cuts power consumption and increases performance. Chiplet design also keeps costs down because it reduces the chances of throwing away an otherwise good processor when one section of it fails. For example, if all the IO were built into the CPU die itself (as in a monolithic architecture) and only one part of the IO section failed, the whole processing unit would be binned. With a chiplet design, a failure in the IO section is confined to that chiplet - a much less costly problem, as all it requires is a replacement IO die.
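To make the yield argument concrete, here is a small sketch of my own (not from the post) using the classic Poisson yield approximation, where the probability of a die being defect-free is exp(-defect density × die area). The defect density and die areas below are made-up numbers purely for demonstration.

C
#include <stdio.h>
#include <math.h>

/* Poisson yield approximation: P(die is good) = exp(-D * A),
 * where D is defects per mm^2 and A is die area in mm^2.
 * D and the die areas below are illustrative assumptions only. */
static double yield(double defects_per_mm2, double area_mm2)
{
    return exp(-defects_per_mm2 * area_mm2);
}

int main(void)
{
    const double D = 0.002;           /* hypothetical defects per mm^2 */
    const double monolithic = 600.0;  /* one big 600 mm^2 die */
    const double chiplet = 150.0;     /* four 150 mm^2 chiplets instead */

    printf("Monolithic 600 mm^2 die yield:      %.1f%%\n",
           yield(D, monolithic) * 100.0);
    printf("Single 150 mm^2 chiplet yield:      %.1f%%\n",
           yield(D, chiplet) * 100.0);
    printf("Four good chiplets (naive product): %.1f%%\n",
           pow(yield(D, chiplet), 4) * 100.0);

    /* Note: the naive product equals the monolithic yield (same total
     * area), but with chiplets a defect only scraps one small die and
     * known-good dies can be combined, so far less silicon is wasted. */
    return 0;
}

The point of the sketch is simply that a defect in a small die costs far less silicon than a defect in a large one, which is the effect described above.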

Another way manufacturers are continuing to squeeze performance out of these chips is by embedding algorithms in the hardware itself. This has been a feature of CPUs since the Pentium MMX, which brought new instruction sets to improve the multimedia capabilities of computers back then. It basically means that instead of a programmer writing, say, a vector routine entirely by hand, the operation is provided as an instruction or dedicated unit built into the CPU, and it therefore runs much faster - think of AVX-512 for vector maths, or the hardware video encoding blocks found in modern chips. You'll see that Apple has done a lot of this in its M1 and M2 chips over the years to give them an even better performance result than the Intel CPUs they replaced; by doing this, Apple has improved the effectiveness of its software through the use of hardware. This could become a problem, however, and it might damage the cross-compatibility of software across operating systems and platforms. I say this because if a piece of software is developed to use AVX-512, it can usually still be made to work on a system without AVX-512 by falling back to a slower software path. But when software relies on libraries tied to a specific GPU or CPU feature, cross-compatibility may not be possible without writing massive amounts of additional code (DirectX on Windows and Metal on macOS, for example, both cause issues when porting).
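To illustrate the compatibility point, here is a minimal sketch of my own (not from the post) of how software typically copes with an optional CPU feature such as AVX-512: detect it at run time and fall back to a plain path when it is absent. It relies on the GCC/Clang __builtin_cpu_supports helper, so it assumes one of those compilers on x86; the "AVX-512 path" is only a stand-in to show the dispatch pattern.

C
#include <stdio.h>

/* Sum an array, choosing an implementation at run time.
 * A real build would compile a genuinely vectorised version with
 * -mavx512f and dispatch to it; here the branch just reports the choice. */
static double sum_scalar(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

int main(void)
{
    double data[] = { 1.0, 2.0, 3.0, 4.0 };

    __builtin_cpu_init(); /* initialise feature detection (GCC/Clang, x86) */

    if (__builtin_cpu_supports("avx512f"))
        printf("AVX-512 available: would dispatch to the vectorised path\n");
    else
        printf("No AVX-512: falling back to the scalar path\n");

    printf("sum = %f\n", sum_scalar(data, 4));
    return 0;
}

This run-time dispatch is why AVX-512-aware software can still run on older CPUs, whereas code written against a platform-specific graphics library has no such fallback.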

I have believed since I was a teenager that this second method is indeed the way forward, but it would really only work in a single CPU/GPU architecture world - a bit like if everyone were using x86. That's never going to happen, but perhaps if we had a library like Vulkan that could abstract over those underlying APIs and hardware and make things simpler for developers, then maybe this is actually the best option.
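As a rough illustration of what that kind of abstraction looks like, the sketch below (my own, not from the post) uses the standard Vulkan C API to create an instance and print the name of the first GPU it finds; the same code runs on Windows, Linux and, via MoltenVK, macOS, regardless of GPU vendor. It is a minimal sketch with only basic error handling, and the 8-device cap is an arbitrary assumption.

C
#include <stdio.h>
#include <vulkan/vulkan.h>

/* Minimal Vulkan example: one portable API over many GPUs and drivers.
 * Create an instance, enumerate physical devices, print the first name. */
int main(void)
{
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "portable-demo",
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };

    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan driver available\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    if (count > 0) {
        VkPhysicalDevice devices[8]; /* arbitrary cap for the demo */
        if (count > 8) count = 8;
        vkEnumeratePhysicalDevices(instance, &count, devices);

        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[0], &props);
        printf("Found GPU: %s\n", props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}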

Posted by jamiebalfour04 in Tech talk
intel
gordon moore
apple
chiplet
monolithic
vulkan
api
hardware
cpu
encoding
instructions
moore's law

Let's be honest here: Microsoft has been trying to ditch Control Panel since Windows 8 back in 2012, and yet here, all the way into Windows 11, ten years later, Control Panel still resides in the core of the operating system. Its uses may have slowly begun to disappear, yet there are still some things you can only do within Control Panel.

Windows 11 is still a mess, with its settings spread across the whole system.

For example, to change the mouse double-click speed, you must go to Control Panel and then the Mouse settings. There are other weird things in here too, such as the Work Folders feature, the Sync Centre (which should be a separate app in my opinion), AutoPlay, the Windows Mobility Centre (why does this even still exist?!) and Windows Defender Firewall. But the worst of all is Programs and Features - not knowing where to go to uninstall an app will confuse some people. The Mail applet is another weird one, as are Phone and Modem (which should have been removed from Control Panel a long time ago).

So why do these things still exist? Does Microsoft not care about making the OS perfect anymore?

We all know how infuriating it can be when a computer just stops working, and repairability is key to making the world more sustainable. SoCs have made some parts of this easier than before, but they have also brought many other issues with them.

First of all, SoCs, or Systems-on-Chips, are replacements for the architecture that existed for generations, where the CPU, GPU, northbridge, southbridge and main memory were in completely separate parts of the system.

Back in the Core 2 Duo, Phenom and Athlon days we had IGPs (Integrated Graphics Processors) that communicated through the northbridge, which meant they were a long way away from the central processing unit and were therefore slowed by the long path they had to take to perform operations. We also had two bridges: the northbridge (or memory hub), which handled the CPU's communication with the main memory (RAM) and, in the case of an IGP, with the graphics processor; and the southbridge. The northbridge was removed first by AMD and then by Intel (you'll notice that while AMD has always been the underdog in the CPU market, it has actually brought some of the best innovations to the market, such as x86-64 and HyperTransport).

But we still had the southbridge for a very long time, and it continued to provide backwards compatibility with older hardware such as PS/2, RS-232 and other obsolete connectors through the Super I/O hub. Lately, however, the Super I/O hub doesn't really need to exist, and the whole southbridge has been integrated as a chiplet into the main chip (or SoC). Not only does this improve performance, it also reduces power consumption and heat.

The separate chiplet idea is also very feasible compared with integrating everything into the central processing unit die or a dedicated external chip, as you'll appreciate if you're familiar with the idea of binning chips.

SoCs have problems though

As an advocate for the SoC architecture over traditional architectures, I can see the humongous benefits that SoCs bring: they bridge the gap that existed before, whereby the performance of a computer was limited by how long the wires between different components were. But they do bring one caveat.

That one caveat is the fact that almost all SoCs in laptop computers are soldered, often using a BGA-style package. This means the whole board needs replacing when the SoC has one faulty part, and, with more being integrated into the SoC, this is more likely to happen. Replacing an SoC is therefore not only far more expensive but also far more difficult.

I've replaced many CPUs in laptops from my teenage years onwards - most notably swapping the Pentium 4 in my laptop for a Pentium 4 Mobile-M chip - but I would be hard-pressed to try to replace a BGA chip these days.

How they need to be improved, particularly in laptops

As the Right-to-Repair movement progresses even further, one of the main things people should be pushing for is PGA- or LGA-based sockets that allow direct replacement of the SoC again. Heck, even the Pentium M CPUs in their Socket 479 sockets were replaceable to the point of simply removing one and slotting in another. In a world where slimness is treated as the most important thing, we really need to think about sustainability too.

I finally decided to jump away from big towering desktop computers to a more reasonable laptop-based eGPU setup. 

In the next few months, I will begin dismantling my desktop setup, which I have had since January 2019. The desktop I currently have contains parts that have survived five generations of desktop PCs, and some of them date back as far as 2007. Its PSU replaced my 10-year-old Corsair HX850, which lasted exactly ten years from when I built my first PC in 2009. That was the point at which I started considering ditching the desktop PC altogether and going down the USB-C-based laptop route, which I tried with my Razer Blade Stealth until its screen started to fail.

I have been torn between two laptops that both fit my environmental concerns and my Right-to-Repair beliefs: the Framework Laptop and the HP EliteBook 845 G9. I have mixed feelings about both, but I am more swayed towards the Ryzen processor than an Intel chip that offers little performance improvement over my brother's Skylake-based computer.

The Collatz conjecture, named after Lothar Collatz, who introduced the idea, is a perfect example of a mathematical problem that has not been solved with the tools available within the domain and scope of today's mathematics.

It goes a little like this:

$$f(n) = \begin{cases} \dfrac{n}{2} & \text{if } n \equiv 0 \pmod{2} \\[4px] 3n+1 & \text{if } n \equiv 1 \pmod{2}. \end{cases}$$

It's a nice little conjecture to turn into a program, so I decided to write it in YASS to demonstrate it based on the definition above:

YASS
function f ($n)

  if($n % 2 == 0)
    $n = $n / 2
  else
    $n = ($n * 3) + 1
  end if

  return $n

end function

$n = input("Please insert a start number")

$iterations = 0
print($n)
while($n != 0 and $iterations < 5)
  
  $n = f($n)

  print($n)
  if($n == 1)
    $iterations++
  end if
end while

It's hard to believe, but as of just under a month ago I am over 30 years of age - and so too is the World Wide Web.

I was discussing this with someone yesterday - how the web (W3) has evolved from a simple document-sharing system into something that lets you build and use powerful applications. I say this because on Monday I went to one of my favourite restaurants and they use a web app to manage everything, and I mean everything. It's very impressive.

Anyway, take a look at How To Geek for more information on this:

https://www.howtogeek.com/744795/the-first-website-how-the-web-looked-30-years-ago/

Posted by jamiebalfour04 in Tech talk
www
w3
changes
history

MXM, or Mobile PCI Express Module, is one of the most interesting standards, and one I first found out about in 2005 when attempting to fix a laptop for a friend (Calum). His Fujitsu Amilo 3438 featured a removable GPU - something I had never seen before. When I was asked to try to repair it, I was honestly astonished at how easy it would be to do.

Unfortunately, it didn't end as nicely as it should have, and it seemed the laptop was beyond repair. I'm glad I got the chance to try, though, because it brought the MXM standard to my attention.

Having seen that MXM was a standard, and one that was a very good idea in practice, I have been a fan of it for a very long time.

As someone who believes strongly in the Right to Repair, I think MXM may have a big part to play in the future - but what do the companies behind the systems that could incorporate MXM actually think?

What is MXM?

MXM is one of those really brilliant ideas that is unfortunately held back by the manufacturers of computer systems. It is no doubt more expensive to add a replaceable graphics card to a system than to solder one down permanently, but for the manufacturer it is also less profitable in the long run, and that is my main concern.

The MXM standard is designed to give owners the ability to upgrade their systems at a later date or replace parts when they stop working. The problem with this idea, at least in the eyes of the manufacturer, is that customers will stick with the same computer system for longer rather than replacing it regularly.

Another major issue with MXM is that it takes up more room than a soldered GPU and therefore doesn't allow for incredibly thin laptop designs (like the MacBooks, where Apple sacrifices everything to get thinner and thinner computers).

MXM cards, being detachable components, are also more likely to fail due to connector issues - something far less likely with soldered GPUs.

But even with all of these problems, MXM still addresses one major concern that should be more prominent now than ever: environmental impact. It concerns me how wasteful we have become with computers with soldered memory and storage drives (like my old MacBook Pro and now my current MacBook Pro). A soldered GPU basically means that when the GPU decides to pack it in, the whole system stops working - I've had this happen on numerous computers. MXM allows us to replace a broken GPU or upgrade an old one, breathing new life into the computer. From a purely environmental point of view, that would be amazing.

With these new right-to-repair laws being passed, surely the time is right for MXM to take centre stage?

Posted by jamiebalfour04 in Tech talk
mxm
gpu

As someone who loves computer hardware, I found this article a very interesting reminder of the benefits of RISC over CISC and of CISC over RISC architectures.

https://medium.com/swlh/what-does-risc-and-cisc-mean-in-2020-7b4d42c9a9de

Posted by jamiebalfour04 in Tech talk
risc
cisc
processors

Hello folks, I hope everyone is okay and not as fed up as I am with everything going on in the world at present!

Today I was thinking about something I use every day: switchable graphics. In my MacBook Pro I have an Nvidia GT 750M - a decent dedicated graphics card that can run some games well but has never been used for gaming. In fact, my dedicated card only gets used for graphics and video editing, and the rest of the time the machine stays cool and quiet on the Intel Iris Pro graphics. The fact that my computer switches automatically without me even noticing (other than the notification I have set up) is quite truly amazing.

So, what about switchable memory? Have a low-power implementation such as LPDDR3 alongside high-performance DDR4 as the fast memory. It wouldn't be a particularly good idea in smaller laptops, but in a 15-inch machine like my MacBook Pro it might be an excellent one. Something like this could also help a device such as the Nintendo Switch: lower-power RAM for on the move, then a more powerful implementation that activates when it is docked.

Of course, there are issues with this concept of switchable memory. The main one that comes to mind is how you keep the two in sync. If one memory is to be turned off, you need a fast bus or lane to transfer the data from one memory module or type to the other, and that could itself end up using a lot of power.

This is just a thought...
