Even though we hear the terms artificial intelligence (AI) and machine learning (ML) almost daily, there’s still a lot of confusion about what these terms actually mean. In a nutshell, AI is an umbrella term for technologies that enable machines to simulate human behavior. ML is a subset of AI that allows machines to learn automatically from past data and events without being explicitly programmed to do so.
Another common misperception is that AI is a relatively new development; in reality, scientists and engineers have been working on AI-related technologies for decades. In fact, the founding event that led to AI as we know it today was a Dartmouth workshop, the Dartmouth Summer Research Project on Artificial Intelligence, held in 1956.
We can divide the development of AI (and ML) into two eras. During the first era, until circa 2012, AI computational requirements doubled approximately every two years, roughly tracking Moore’s Law. However, around 2012, developments in AI architectures and algorithms led to an inflection point that marked the beginning of the second (modern) era of AI, in which compute requirements for enterprise-level AI systems started to double approximately every three-and-a-half months (Figure 1). Fortunately, this demand for computational power can be satisfied using the tremendous XPU (CPU, GPU, FPGA, etc.) and memory resources made available by modern cloud computing environments.
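The difference between the two doubling periods cited above is easy to underestimate. The following sketch makes the arithmetic concrete; the seven-year window is an arbitrary choice for illustration, not a figure from the article.

```python
# Illustrative arithmetic only: compare total compute growth under the two
# doubling periods mentioned in the text (~24 months in the Moore's Law era
# vs. ~3.5 months in the modern era of AI).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total growth after `months` if compute doubles every `doubling_period_months`."""
    return 2.0 ** (months / doubling_period_months)

years = 7
months = years * 12

moore_era = growth_factor(months, 24.0)   # doubling roughly every two years
modern_era = growth_factor(months, 3.5)   # doubling roughly every 3.5 months

print(f"Over {years} years: Moore-era growth ~{moore_era:.0f}x, "
      f"modern-era growth ~{modern_era:.2e}x")
```

Over the same seven years, doubling every two years yields roughly an 11x increase, while doubling every 3.5 months yields roughly a 16-million-fold increase, which is why modern cloud-scale XPU and memory resources matter.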
Figure 1: The two eras of AI compute. (Source: OpenAI)
Today, AI and ML technologies are becoming ubiquitous: handwriting recognition on tablet computers, natural-language speech recognition and generation in smart appliances, machine vision with object detection and recognition in robots, predictive maintenance in the automotive industry; the list goes on.
Most recently, ML has started to appear in PCB layout applications. Unfortunately, when they hear this, many PCB designers have a knee-jerk reaction of, “Oh no! You’re trying to take my job away.” In fact, nothing could be further from the truth. ML can take away all the boring parts of the job, enabling PCB designers to focus on the interesting portions of the design they do best.
Take a moment to think about your daily computer-based activities. On occasion, you probably find yourself performing the same simple formatting task repeatedly. Of course, recording a macro to implement a sequence of operations is possible, but this falls over when any small variation is required. Pattern detection, pattern recognition, and pattern matching are things that ML does well—not just patterns in a graphical or visual sense, but any patterns, such as patterns of action. ML is also really good at doing the monotonous stuff people hate to do. This is where ML will start in PCB layout and morph from there.
For example, in the case of your repetitive formatting activities, a PCB design system augmented by ML could learn the initial sequence by watching what you were doing, and subsequently, learn variations to that sequence as they came along, quickly assuming the responsibility of implementing these boring, time-consuming, and error-prone activities. If you start to perform the same formatting activity on a future design, the ML could immediately recognize what you were doing and offer to take over. Even better, if you start a repetitive formatting task that is new to you but that someone else in your organization has previously performed, the ML system could bring that experience into play.
Some tasks in the actual layout can leave you frustrated. The simplest example is placing a complex ball grid array (BGA) component along with its dependencies, such as its decoupling capacitors and breakout routing. This might be something you’ve done many times before, and understandably, you find little joy in performing this task. Now suppose that an ML-augmented system, trained on all your previous designs, recognized this component and immediately presented you with one or more possible implementation scenarios.
Once again, even if you hadn’t previously worked with this particular component, you could reap the benefits of others’ designs, including all the tips and tricks they’ve learned over the years. For example, the vendor may have originally trained the ML system using a dataset comprising tens of thousands of generic designs. This original training may subsequently have been augmented using all your own company’s previous designs. Now, you can realize the benefits of all this earlier work.
Moving above the component level, we’ve long been able to cut-and-paste portions of a design, for example, a power supply circuit. As clever as this capability is, it’s not particularly intelligent in the scheme of things. The ability to select a portion of the design and reuse it somewhere else while simultaneously changing things such as its footprint and characteristics (e.g., different voltage and current ratings in a power supply) would surely be more useful. ML’s ability to learn from tens of thousands of designs means that today’s “unintelligent” reuse can evolve into “intelligent” reuse capabilities.
To perform its magic, the ML needs to understand the designer’s goals for each part of the design—such as frequency, impedance, power consumption, acceptable IR drop, noise sensitivity, and so forth. When working on a portion of the design, the ML system will use various simulation tools running in the background (signal integrity, power integrity, thermal, etc.) to evaluate potential solutions to meet the desired goals.
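As a rough sketch of what "understanding the designer's goals" might look like in practice, the goals listed above could be captured per net and checked against background-simulation results. All names, fields, and thresholds below are invented for illustration; they do not represent any vendor's actual API.

```python
# Hypothetical goal specification for one net, plus a check against
# signal-integrity/power-integrity simulation results. Everything here
# is illustrative, not a real PCB tool interface.
from dataclasses import dataclass

@dataclass
class NetGoals:
    name: str
    target_impedance_ohms: float     # e.g., 50-ohm single-ended
    impedance_tolerance_pct: float   # acceptable deviation from target
    max_ir_drop_mv: float            # acceptable IR drop
    noise_sensitive: bool            # route away from aggressors if True

def meets_goals(goals: NetGoals, sim: dict) -> bool:
    """Compare background-simulation metrics against the stated goals."""
    imp_err = abs(sim["impedance_ohms"] - goals.target_impedance_ohms)
    imp_ok = imp_err <= goals.target_impedance_ohms * goals.impedance_tolerance_pct / 100
    ir_ok = sim["ir_drop_mv"] <= goals.max_ir_drop_mv
    return imp_ok and ir_ok

clk = NetGoals("CLK100", 50.0, 10.0, 25.0, True)
print(meets_goals(clk, {"impedance_ohms": 52.0, "ir_drop_mv": 18.0}))  # True
```

An ML system could run such checks automatically, in the background, for every candidate placement and routing solution it proposes.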
With so many different considerations, it can be hard to wrap your brain around everything. At its heart, PCB layout is a huge multi-objective optimization problem involving many design constraints (cost, power, performance, SI, PI, thermal, etc.) along with manufacturing constraints (trace widths, clearances, etc.). This explains why significant automation has so far failed to take hold: traditional optimization methods are untenable for a large, multi-dimensional problem of this kind with thousands of constraints.
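To make the multi-objective framing concrete, here is a deliberately toy sketch: hard manufacturing constraints filter candidate layouts outright, and the surviving candidates are ranked by a weighted sum of competing soft objectives. Weighted-sum scalarization is only one (simplistic) approach, and every metric name and number below is invented for illustration.

```python
# Toy multi-objective scoring of candidate layouts. Hard constraints
# (manufacturing rules) reject candidates; soft objectives (SI, PI,
# thermal) are collapsed into a single cost. Lower cost is better.

def feasible(layout: dict) -> bool:
    """Hard manufacturing constraints: reject outright if violated."""
    return (layout["min_trace_width_mil"] >= 4.0 and
            layout["min_clearance_mil"] >= 4.0)

def cost(layout: dict, weights: dict) -> float:
    """Soft objectives collapsed into one score via a weighted sum."""
    return sum(weights[k] * layout[k] for k in weights)

weights = {"crosstalk_mv": 1.0, "ir_drop_mv": 0.5, "max_temp_c": 0.2}

candidates = [
    {"min_trace_width_mil": 5.0, "min_clearance_mil": 5.0,
     "crosstalk_mv": 12.0, "ir_drop_mv": 30.0, "max_temp_c": 70.0},
    {"min_trace_width_mil": 3.0, "min_clearance_mil": 5.0,  # trace too thin: rejected
     "crosstalk_mv": 5.0, "ir_drop_mv": 10.0, "max_temp_c": 60.0},
    {"min_trace_width_mil": 4.5, "min_clearance_mil": 4.5,
     "crosstalk_mv": 8.0, "ir_drop_mv": 35.0, "max_temp_c": 65.0},
]

best = min((c for c in candidates if feasible(c)), key=lambda c: cost(c, weights))
print(cost(best, weights))  # 38.5
```

A real layout problem has thousands of such constraints and objectives rather than a handful, which is precisely why exhaustive or hand-tuned methods break down and learned approaches become attractive.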
Consequently, PCB designers traditionally live by rules of thumb (ROT), which usually means over-engineering everything; this costs money and leaves performance on the table. Ask yourself, “How much over-engineering have I done on my designs in the past?” With the complexity and density of designs increasing dramatically, we can no longer afford to over-engineer. We need to move away from a “this is a pretty pattern” mentality and toward a “does it function properly and optimally, irrespective of what it looks like?” point of view. Think of all the effort expended making traces look neat, tidy, and evenly spaced. The time we’ve all spent (wasted) doing this over the years is phenomenal. If we start to think of PCB layout as a multi-objective optimization problem that is solvable using ML, then who cares how it looks? If it works and you can manufacture it, isn’t that ultimately all we want?
There is, of course, a natural tendency to fight against the introduction of new technologies and new ways of doing things. We’ve seen this many times. When C compilers were first introduced, programmers were convinced that they could create smaller, faster code by hand using assembly language. Similarly, when language-driven logic synthesis first appeared on the scene, digital logic designers were convinced that they could handcraft smaller, faster designs using primitive logic gates and registers. This may even have been true in the early days, although compilers and synthesis engines quickly evolved to support better optimizations. However, this misses the point: moving to these new ways of doing things allowed programmers and logic designers to quickly explore larger solution spaces, realizing more optimal designs. This also applies to ML in the context of PCB layout; we really don’t want to be left behind when our competitors begin embracing this new way of doing things.
It is, of course, important not to be dazzled by the hype and to understand that we are still in the very early days of ML-augmented PCB layout. For example, we are not currently looking at applying this technology to 2,000- to 3,000-hour projects such as monstrous motherboards. ML simply isn’t in a place to handle this level of complexity right now. However, suppose you are working on a small board (or a small portion of a bigger design) like a power supply unit (PSU), fan controller, or a simple I/O board or front panel controller with almost nothing high-speed on it—something that might take you a week or two by hand, and you think, “I’m tired of wasting my time and creative juices on these mundane projects.” Now suppose that your ML-augmented PCB layout tool has been trained on 20,000 designs embracing similar functions, and it can explore this massive solution space to select the optimal implementation for your particular problem. You could, of course, do this yourself (come back and tell us when you’ve finished), but the ML will do it much faster and much better.
At the end of the day, if we discard all of the other arguments and focus on purely selfish motivations, one really good reason for using ML-augmented PCB layout is that it’s time, finally, to stop doing the boring things and to start doing only the cool things.
Jorge Gonzalez is a lead software engineer at Cadence. Luke Roberto is a principal software engineer at Cadence.
Download The System Designer’s Guide to… System Analysis by Brad Griffin along with its companion book The Cadence System Design Solutions Guide. You can also view other titles in our full I-007eBook library here.