Jamie Balfour's Personal blog
Chip

Moore's Law was kind of like the golden rule for computer systems. Coined by Intel co-founder Gordon Moore, the rule has been the centrepiece and fundamental principle behind the ever-improving computer systems we have today. It basically spelt out the future: the number of transistors on a chip would double roughly every two years, and with that, computers would get better performance year after year.

It did this not by increasing the package size of CPUs but by reducing the feature size. Take my EliteBook, for example. It has a Ryzen 6850U processor, built on a 7-nanometer process. This figure describes the fabrication process, and it roughly means that the features of each transistor within a package (a CPU die) are around that size. The smaller the feature size, the more likely it is for something to fail or go wrong during production, making CPUs more and more difficult to manufacture as feature sizes shrink. Not only that, it has always been said that we will be unable to get the feature size smaller than the size of an atom (which is approximately 0.1 nanometers). The theoretical boundary for the smallest feature size we can manufacture is approximately 0.5 nanometers - that's not that far away from where we are at 4 nanometers in 2023.
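
To put some rough, purely illustrative arithmetic on that (assuming a halving of feature size per full node shrink, which is a simplification rather than fab data):

Python
import math

# How many more halvings of feature size fit between 4nm and the ~0.5nm floor?
current_nm = 4.0
limit_nm = 0.5
halvings = math.log2(current_nm / limit_nm)
print(f"{halvings:.0f} halvings left")  # 3

# At roughly one shrink every 2-3 years, that is only a decade or so away.
for years_per_shrink in (2, 3):
    print(f"~{round(halvings) * years_per_shrink} years at {years_per_shrink} years per shrink")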

Over the last few years, manufacturers have been trying to squeeze every bit of extra performance out of the latest chips. Apple has been increasing the physical size of its processors by joining dies together (think of the M1 Ultra, which is essentially two M1 Max dies stuck together with a little bit of magic). This makes the processor very large and unsuitable for smaller devices. AMD, on the other hand, has moved to a design where wastage is less frequent, increasing the yield of good processors and in turn allowing them to cram more performance into their dies.

To combat this issue, some CPU manufacturers moved to a chiplet design, in which components such as the memory controller and IO controllers (USB and so on) are split onto separate dies, in contrast to the monolithic architecture used in the past, where the entire processor resides within one die. The first time I experienced a chiplet-like CPU was the Intel Core 2 Quad, which was literally two Core 2 Duo dies on one chip. The issue is that this takes more space than building the chip as a single monolithic die. There are also other complications, such as communication between the dies, but once these have been solved for one chip they are easily overcome, as the designs can be reused in other chips. There are power consumption concerns when two CPU dies are put on the same chip, but with separate chiplets for IO and memory, the distance between the CPU and those components is reduced, actually cutting power consumption and increasing performance. Chiplet design also keeps costs down, as it reduces the chance of binning an otherwise good processor when one section of it fails. For example, if all IO were built into the CPU die (as is the case in a monolithic architecture) and only one part of the IO section failed, the whole processing unit would be binned. With a chiplet design, a failure in the IO section affects only that chiplet - a much less costly fix, as all it requires is a replacement IO die.
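
The yield argument is easy to see with a toy defect model. This is a minimal sketch using the standard Poisson yield approximation; the defect density and die areas are made-up, illustrative numbers, not real fab data:

Python
import math

# Poisson yield model: the probability a die has zero defects is
# exp(-defect_density * die_area).
defect_density = 0.001    # defects per mm^2 (0.1 per cm^2) - assumed
monolithic_area = 600.0   # mm^2, one big die - assumed
chiplet_area = 150.0      # mm^2, one of four smaller dies - assumed

monolithic_yield = math.exp(-defect_density * monolithic_area)
chiplet_yield = math.exp(-defect_density * chiplet_area)

print(f"monolithic die yield: {monolithic_yield:.0%}")  # ~55%
print(f"single chiplet yield: {chiplet_yield:.0%}")     # ~86%

# A single defect scraps the whole 600mm^2 monolithic die, but only one
# 150mm^2 chiplet - the good chiplets from the same wafer are still usable.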

Another way manufacturers are continuing to squeeze performance out of these chips is by embedding algorithms in them. This has been a feature of CPUs since the Pentium MMX, which brought new instructions to improve the multimedia capabilities of computers back then. It basically means that instead of a programmer implementing an algorithm in software - say, vector encoding - the CPU provides it as a built-in instruction (something like AVX-512), and it therefore runs much faster. This is often called hardware acceleration, or hardware encoding in the case of video. You'll see that Apple has done a lot of this in its M1 and M2 chips over the years to give them an even better performance result than the Intel CPUs they replaced. By doing this, Apple has managed to improve the effectiveness of its software through the use of hardware. This could become a problem, however, and it might damage the cross-compatibility of software across operating systems and platforms. I say this because if a piece of software is developed to use AVX-512, it's still very likely to work on a system without AVX-512 instructions by falling back to a software path. But when software depends on a library specific to a GPU or CPU feature, cross-compatibility may not be possible without writing massive amounts of additional code (for example, DirectX on Windows and Metal on macOS both cause issues when porting).
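
To illustrate that fallback dance, here is a minimal Python sketch of runtime feature detection. It assumes the third-party py-cpuinfo package, and sum_squares is just a hypothetical stand-in for real accelerated work:

Python
from cpuinfo import get_cpu_info  # third-party: pip install py-cpuinfo

def sum_squares(values):
    """Portable fallback that runs on any CPU."""
    return sum(v * v for v in values)

flags = get_cpu_info().get("flags", [])

if "avx512f" in flags:
    # In a real project this branch would dispatch to an AVX-512-optimised
    # native routine; here we only report that the fast path exists.
    print("AVX-512 available: using the accelerated path")
else:
    print("AVX-512 not available: using the portable path")

print(sum_squares([1, 2, 3]))  # 14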

I have believed since I was a teen that the second method is indeed the way forward, but it would really only work in a single CPU/GPU architecture world, a bit like if everyone were using x86. That's never going to happen, but perhaps if we had a library like Vulkan that could abstract over those underlying APIs or hardware and make things simpler for developers, then maybe this is actually the best option.

Posted by jamiebalfour04 in Tech talk
Tags: intel, gordon moore, apple, chiplet, monolithic, vulkan, api, hardware, cpu, encoding, instructions, moore's law

After two cancelled release dates, I'm finally able to release ZPE 1.11.4, aka OmegaZ, which features all of the previously discussed additions, such as union types and inline iteration, but also includes a major update to the record data type.

Previously, records looked like this:

YASS
record structure person { string forename = "" }

Now, with ZPE 1.11.4, they've been changed and the structure keyword is optional. As a piece of syntactic sugar, there is now also an optional is keyword:

YASS
record person { string forename = "" }
record person is { string forename = "" }

As well as this, record instances are now created with the new keyword.

YASS
$x = new person()

And fields of the record are accessed using dot notation:

YASS
$x.forename = "John"
print($x.forename)

Whilst the title of this post is somewhat humorous, the topic of this post is far from that.

Recently, several prominent YouTube channels, such as Paul Hibbert and Linus Tech Tips, have experienced account shutdowns and the hacking of their channels.

Both channels experienced similar situations, with the theft of a cookie being all that was needed to get into the website. And it makes sense, too. In the past, I have used session IDs to switch between computers whilst keeping my session the same. So really, all that is needed to get into a website without authenticating is that cookie. Ultimately, this is why I don't allow websites to do this on my server. However, it still leaves security issues with other websites.

What actually happens in these attacks is this: the user logs into the website as normal, and a cookie is transferred to the user's computer. The cookie is sent back to the web server each time the client requests something, identifying who they are. The session is stored on the server under the ID defined in the cookie and contains information about who the user is - it's fairly simple. But if a hacker obtains this ID, they can set it in their own browser and pretend to be logged in as that user.
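
To make that concrete, here is a minimal Python sketch of a session replay. The site, cookie name and session value are all hypothetical; the point is simply that no password or two-factor prompt is ever involved:

Python
import requests

# A session ID lifted from the victim's browser - hypothetical value.
stolen_session_id = "abc123def456"

session = requests.Session()
# Present the stolen cookie as if it were our own (hypothetical site/name).
session.cookies.set("SESSIONID", stolen_session_id, domain="example.com")

# The server looks up the session by ID and treats us as the victim.
response = session.get("https://example.com/account")
print(response.status_code)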

As I have a control tablet at the entrance to my house (and plan to add one to my bedroom), my smart home needs to be constantly improved to keep it functional and useful.

One of the latest things I have managed to do is add bin collection information to my tablet. This wasn't the easiest thing to do, since my council doesn't publish this as JSON data; instead, I had to use XPath to read the HTML of the page, find the data manually, translate it to JSON and read it with my Home Assistant installation. Currently, all that happens with this is that a warning is displayed at the top of the tablet showing which bins (if any) are to be put out that day.
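
For anyone wanting to do something similar, here is a minimal Python sketch of the scrape-and-convert step. The council URL, table structure and XPath expressions are hypothetical - the real selectors depend entirely on your council's markup:

Python
import json
import requests
from lxml import html

# Hypothetical council page - substitute your own.
URL = "https://www.example-council.gov.uk/bin-collections"

page = requests.get(URL, timeout=10)
tree = html.fromstring(page.content)

# Assumed structure: a table where column 1 is the date and column 2 the bin.
dates = tree.xpath("//table[@class='collections']//td[1]/text()")
bins = tree.xpath("//table[@class='collections']//td[2]/text()")

collections = [{"date": d.strip(), "bin": b.strip()} for d, b in zip(dates, bins)]
print(json.dumps(collections, indent=2))

Home Assistant can then consume this JSON via something like a command_line sensor and raise the warning on the tablet.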

Anyway, the following is a bunch of screenshots from my improved smart home.

Finally, union types have been accepted into ZPE 1.11.4. The March release, known as OmegaZ, will now include union return types:

YASS
function doubleNum(integer $x) : integer | real
  return $x * 2
end function
print(doubleNum(2))

It will also soon support union parameter and variable declarations, which are expected in OmegaY. This further expansion of ZPE's type system, TYPO, nudges it to version 2.1. All of this has been added with almost no performance penalty at all.

It sounds like the start of a film title - Zigbee, NUCs and WiFi - but it's not. Simply put, this blog post is a short reminder to those who use the Zigbee protocol and Intel NUCs. The issue relates to signal interference, which leads not only to a weaker and slower signal but also to higher power consumption.

I recently moved my NUC from my server cupboard to my office, where it will remain. There were three reasons for this. First, on occasion the server needs to be rebuilt or given a new SSD, and as a result I need the server to have a display. I usually route the HDMI connector through a TX cable back up to the office, but this takes time to set up. I could use VNC or something like that, but again, it takes time, so to make it simpler I have moved it up to my own office. The second reason is that it gets quite warm in my cupboard, so I was hoping moving it away would keep things a bit cooler. The final reason is the simplicity of my cupboard now - it only houses the main network of the house (a small 8-port PoE switch, my only 2.5GbE network at present, which powers devices near it such as the Ring hub and provides connections to my other networks around the house), my router and my Ring hub.

However, the purpose of this article is not to inform you of my new changes, but rather of the issues I experienced when moving my NUC. I have a NUC7PJYH as my home server - it's low power and isn't the most powerful little machine. It does, however, support Linux really well, and it runs a variant of Ubuntu. My home server has many containers running on it: a web server, a PBX, a HomeBridge instance and a Home Assistant instance. Generally, it does quite a lot, so I like to ensure I have regular backups of it (I've set up a cron job to take snapshots daily). However, it wasn't until recently that I noticed my Zigbee devices were taking a lot of time to respond when I pressed buttons around the house. This only occurred after I moved the server to my office, and I quickly started to realise why. In my office there are far fewer WiFi devices (I don't have a lot of WiFi devices in the house; most things are on Ethernet), so I assumed it would have been better. But I had put the Zigbee USB stick on top of my NUC. It struck me why, in my server cupboard, I had placed the Zigbee antenna in the corner away from the NUC - interference. Interference from what, though? I hadn't even realised this, but the NUC actually has built-in Bluetooth and WiFi.

Having these enabled can seriously impact the performance of the Zigbee hub, and I seriously recommend switching them off. Did you know that USB 3.x can interfere too? Yup, the electrical signals cause a small amount of electromagnetic interference that causes issues with Zigbee.

Meet ClickIt Embed! This almost-perfected feature takes advantage of my new LightningJS parser and compiler, codenamed Pietro, to make embedding ClickIt code easy!

It makes presenting ClickIt files easier than using the editor and offers a nice new alternative for presenting HTML. Take a look below:

I've been working even more on ClickIt than I imagined I would be over the last few days. Today's update is a big one (again).

The old system of dragging files onto the Open item on the ribbon has been replaced by a new dialog interface that allows users to browse their computer or use drag and drop. It's much easier to understand and I'm thrilled with it. 

Extending this feature is the new paste feature, which allows HTML code to be copied and pasted straight into the editor, where it is compiled into HTML blocks, taking advantage of LightningJS.

Another major update is more of a fix. This fix is for code blocks with attributes longer than the width of the code area. Now, instead of wrapping onto the following line, the line extends horizontally. It's much more intuitive and, as I say, it's what I always wanted.

Another major fix was to the code-printing feature. Printing has at last been fixed to display code as it is, with spaces and so on.

Since TYPO was introduced and then upgraded to TYPO v2, typing has become a big thing for YASS. The standard library uses types on all built-in functions, and all of my own programs are moving to being typed. One of the new features coming to ZPE in the near future is the option to force typing, disallowing the execution of programs that don't use strong typing. But before that, ZPE is looking to bring union types.

Union types are crucially important for return types on functions that may need to return two separate data types. Rather than using the very abstract mixed type, these functions would use a union type. Assume the following function, which returns either an integer (the index of the found item) or a Boolean false if the item is not found. Using types with this function currently means specifying the mixed type to allow both integer and Boolean return types:

YASS
function linearSearch(string searchTerm, list items) : mixed

This same program could be written with union types as shown below:

YASS
function linearSearch(string searchTerm, list items) : integer | boolean

This is a much more concrete solution to the problem, forcing the return type to be one or the other. It is the first step for union types, as they will also come to variable declarations, but that isn't planned to arrive any time soon.
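
For comparison, here is the same idea sketched in Python (not YASS) - modern Python's union syntax for annotations happens to look very similar:

Python
# Requires Python 3.10+ for the X | Y union syntax in annotations.
def linear_search(search_term: str, items: list[str]) -> int | bool:
    for index, item in enumerate(items):
        if item == search_term:
            return index  # the index of the found item
    return False          # a Boolean false when nothing matches

print(linear_search("b", ["a", "b", "c"]))  # 1
print(linear_search("z", ["a", "b", "c"]))  # False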

The next generation of ClickIt is live! ClickIt 3.0 is a huge improvement over previous versions.

With version 3 you can now import existing HTML pages, and the new LightningJS parser and compiler will transform them into an AST. ClickIt can then turn this AST into ClickIt blocks, attributes and all.

There is scope for further improvement, which I will be looking to bring over the next few days as well.

With this update, ClickIt is back as a major project for me and it now gets a separate page on my website as well as a menu item! 

I am committed to improving ClickIt now that I have spent more time with it recently, and I am looking to introduce cloud storage to it, as well as the ability to add project assets. I'm also ironing out issues as they appear. More HTML elements will continue to be added as I find the time.
