AI Efficiency and the Acceleration of Everything

Imagine preparing for a really long hike. There are bears and cougars out there, so you need to decide whether to bring bear mace and a gun. They take up precious space in your backpack, and over a long hike they will slow you down and make you burn more energy, and therefore more food. Traditionally, you would just bring them because you had no idea where the animals were. Now, imagine you discover an AI mapping startup whose models can track and predict where the bears and cougars will be in any forest. You sign up and use their technology to design a route that avoids the danger. The AI can make you more efficient! You can skip the mace and gun, carry a lighter pack, and even bring less food because you will expend less energy. You can walk faster, with higher confidence you won't be mauled.

This is just an example I made up to illustrate how AI can make you more efficient. Predictive models can increase the efficiency of anything because they help you avoid waste. Many companies are promoting AI as a way to increase efficiency. Large Language Models and products like ChatGPT have made many people more efficient in a lot of ways. ChatGPT lets you write, search, and learn faster. GitHub Copilot lets you write software faster. Adobe integrated generative AI to let you edit images faster. Faster. Faster. Faster.

You go out there on your hike, confident you will avoid bears and cougars. You make it through and are happy you used the technology. You use it for the next 10 hikes. On the 11th hike, about halfway through, you are mauled by a cougar. You die. The model was wrong. You weren't prepared!

On the other side of the efficiency coin is resiliency. As you make anything more efficient, you lose resilience. Resiliency requires extra resources "just in case". It requires bringing the bear mace and gun. Resiliency always has a cost: as you increase resiliency, you decrease efficiency.

Nobody is talking about using AI for resiliency. Nobody wants to pay for a technology which will increase their costs in the long run. Nothing about our society promotes resilience in the business world. Economists don’t have a language or model for it!

When you take efficiency to the extreme, you get deadly fragility. Lean manufacturing has been promoted and pushed globally for over 30 years. Just-in-time manufacturing means you keep low stocks and produce needed items on demand. Each supply chain was tuned to a very narrow variance in demand. When the COVID pandemic spread, demand for some things spiked while demand for others fell dramatically. The variance increased and our supply chains crumbled. Shipping costs spiked non-linearly.

https://fredblog.stlouisfed.org/2022/12/the-swell-of-shipping-costs/

More recently, you can see this push for efficiency in the Boeing 737 door plug incident. On January 5th, 2024, during a flight from Portland, Oregon to Ontario, California, the door plug on a Boeing 737 MAX 9 blew off mid-flight. After the incident, inspections of planes from several airlines found that bolts holding the plug weren't tight enough. Fortunately, nobody was injured. While the root cause was still unknown at the time this article was written, there are hints that Boeing's cost-cutting mindset, in the name of efficiency, came at the cost of safety.

The only language I see from business leaders when talking about the climate crisis is efficiency. They say we need more energy efficiency, that we need to be more efficient in using fossil fuels. However, efficiency comes with a paradox, known as Jevons Paradox: as you increase the efficiency of using a resource, you end up using more of that resource in aggregate! The paradox works because demand/supply curves are non-linear. Efficiency reduces the cost of using a resource for any given purpose, but even a small cost decrease can increase the demand for that resource non-linearly.

This is easily demonstrated with the CAFE (Corporate Average Fuel Economy) standards. CAFE standards increased the fuel efficiency of all vehicles, letting you drive further for the same cost. A road trip that would have been beyond your budget is now possible. In practice, the result is that miles driven per person increase over time.

https://fred.stlouisfed.org/graph/?g=lls

Even though per capita energy expenditure went down over time because of efficiency gains, aggregate usage went up! That's Jevons Paradox. Demand is non-linear in price, and people who couldn't afford certain forms of energy in the past can now afford to use them.
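To see the mechanism concretely, here is a minimal back-of-the-envelope sketch. It assumes a constant-elasticity demand curve, which is my simplification and not something taken from the data above:

\[
M \propto p^{-\varepsilon}, \qquad p = \frac{P_{\text{fuel}}}{\eta}, \qquad F = \frac{M}{\eta} \propto P_{\text{fuel}}^{-\varepsilon}\,\eta^{\varepsilon - 1}
\]

Here \(M\) is miles driven, \(p\) is the cost per mile, \(P_{\text{fuel}}\) is the fuel price, \(\eta\) is fuel efficiency in miles per gallon, and \(F\) is total fuel burned. Whenever the price elasticity \(\varepsilon\) is greater than 1, increasing \(\eta\) increases \(F\): better mileage means more total fuel burned. That rebound effect is exactly Jevons Paradox.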

Efficiency gains will NOT reduce our usage of fossil fuels in the aggregate. We need fossil fuels to be more expensive to use, not less.

So AI's sales pitch of increasing efficiency, if it works, will reduce our resilience and accelerate our usage of every kind of resource. We are currently living through the "Great Acceleration". Everyone should understand that you can't have endless exponential growth on a finite planet.

Fifty years after the Club of Rome's Limits to Growth study, the recalibration of the World3 model is shocking.

https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.13442

We are approaching the physical limits of this planet. In a 2015 paper, Johan Rockström and his colleagues found we had already gone beyond four of the nine boundaries keeping our planet hospitable to life.

Utilizing AI as an approach to increase our efficiency is a kiss of death. We now need resiliency, not efficiency. We don’t need to further bend upward the exponential curve of growth.

An interesting question I have is: is it possible to utilize AI to increase resilience? Can it be used on the other side of the coin? Our current approaches are about fitting curves to distributions. I believe we need a completely different approach, one that does not rely solely on optimization but also on survivability.

Disclaimer: No AI was harmed in the making of this article.

Accessing Dell Server Remotely Using iDRac

Do you need to access a Dell server remotely using iDRac from outside its intranet? iDRac, or Integrated Dell Remote Access Controller, is a powerful tool that allows for remote management. It uses a dedicated IP and operates through a web interface over HTTPS (port 443).

Here’s a step-by-step guide to set up remote access:

Create an SSH Tunnel:

First, establish an SSH tunnel through a publicly reachable machine you have access to. This can be done using the following command:

ssh -L 8443:<iDRac IP>:443 -L 5900:<iDRac IP>:5900 -L 5901:<iDRac IP>:5901 my@publicmachine.com

This command maps port 8443 on your local machine to port 443 on the iDRac server (and similar mappings for ports 5900 and 5901, which are often used for virtual console access).
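As a quick sanity check that the tunnel is up (assuming curl is installed on your machine; this check is an extra step I find useful, not something required by the setup), probe the forwarded port:

curl -kI https://localhost:8443

The -k flag skips certificate validation, since iDRac typically presents a self-signed certificate, and -I requests only the response headers. Getting any HTTP response back, even the 400 described below, means the tunnel itself is working.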

Troubleshooting the 400 Error:

If you attempt to connect through your local browser using https://localhost:8443, you might receive a 400 error. This occurs because iDRac redirects using its IP address. To resolve this, you need to redirect traffic from the iDRac IP to your localhost.

Execute the following command:

sudo iptables -t nat -A OUTPUT -d <iDRac IP> -j DNAT --to-destination 127.0.0.1

Replace <iDRac IP> with the actual IP address of your iDRac.

Accessing iDRac:

Finally, navigate to https://<iDRac IP>:8443 in your browser. This should successfully redirect and allow you access to the iDRac interface.
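When you are finished, you may want to remove the temporary NAT rule so that traffic to the iDRac IP is no longer redirected to localhost. The command below mirrors the rule added earlier, using -D (delete) instead of -A (append):

sudo iptables -t nat -D OUTPUT -d <iDRac IP> -j DNAT --to-destination 127.0.0.1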


Remember, remote server management can be complex and might require specific network configurations based on your organization’s setup. Always ensure you have the necessary permissions and understand the security implications of remote access.

Bugs Faster than the Speed of Thought

I got access to OpenAI's GPT-3 last year, and one of the first things I did was prompt it with a C++ interface struct and have it write the implementation. I was genuinely surprised by the results. Some of the completions were clearly code from GitHub projects, complete with valid GitHub links. My thought was, "Wow, this would be an impressive auto-complete". Today, GitHub released Copilot, a GPT-3 powered auto-complete feature. It's very impressive.

Anybody who has created a production AI system knows that only 20% of the work goes into creating the models; the scaffolding around them is the remaining 80%. I'm sure it took a lot of work to go from the GPT-3 playground to something as well integrated into an IDE as Copilot.

Being well integrated is key to Copilot's success, and it's going to be used by hundreds of thousands, if not a million, programmers very quickly. Which is precisely what makes it so dangerous.

In Code Complete, Steve McConnell wrote extensively about defects in production systems. The industry average defect rate is about 15–50 bugs per 1000 lines of code. Some techniques used by NASA can bring the bug count down to almost zero. Open source software likely has MORE bugs per 1000 lines of code, because most open source projects have one developer and no eyeballs.

Copilot isn't magic and will perform worse than a human coder on average. If it's trained on the gigantic, 100-million-project corpus of GitHub, its output will almost certainly contain more than 50 bugs per 1000 lines of code. This is faster than copy-pasting code snippets, because Copilot auto-completes code that will likely compile and require less human correction. All programmers understand why copy-pasting code is bad: it likely introduces bugs. With Copilot, bugs will be transmitted faster than the speed of thought.

What could the consequences of buggy software being written at a breakneck pace be? The fatal Boeing 737 MAX 8 crash involving Ethiopian Airlines in 2019 was the result of AI gone wrong. They took a safety system that was supposed to engage only in critical situations and expanded it to noncritical situations. Black box systems kill. Imagine this for a second: building AI systems is the future of software. You will no longer write algorithms but the scaffolding of learning systems. Now imagine that scaffolding itself is written mostly by Copilot. Bugs will propagate in new ways, via systems that build systems.

Building software is building a small world. It's about meaning, and we know GPT-3 doesn't understand meaning. It won't understand your problem either. When programmers get used to auto-completed code that compiles, how deeply will they look into it? Will they review it carefully? Building human-machine interaction is hard, and you don't want humans writing software asleep at the wheel.