Nvidia Announces New Drive CX And PX Automotive Tech At CES

Right after the company announced the new Tegra X1 mobile SoC at a press conference in Las Vegas, Nvidia's CEO, Jen-Hsun Huang, went on to announce the company's plans in the automotive space.

As it turns out, Nvidia will be doing quite a bit in that space, and the way we see it, it may be exactly what the automotive industry needs.


The first announcement in the category was the Nvidia Drive CX, which the graphics card maker calls a "Digital Cockpit Computer." The idea is a single central computing system that drives all the displays inside the car. Nvidia believes that cars will get more and more built-in screens, and that managing them all from one central computer is what will make them shine. Today's high-tech cars only have about 700 thousand pixels to push, which isn't much, yet Nvidia built the Drive CX to be powerful enough to push up to 16.6 million pixels. That makes sense: add a couple of passenger displays and you quickly end up with a very high pixel count. The Drive CX is based on the just-announced Tegra X1.
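
To put those pixel figures in perspective, here's a quick back-of-the-envelope calculation; the display resolutions below are our own assumptions for illustration, not anything Nvidia specified.

```python
# Rough pixel budget for a multi-display cockpit.
# All display resolutions here are illustrative assumptions, not Nvidia specs.

def total_pixels(displays):
    """Sum the pixel count of a set of (width, height) display resolutions."""
    return sum(w * h for w, h in displays.values())

today = {
    "instrument_cluster": (1280, 480),
    "center_stack": (800, 480),
}
print(f"Typical cockpit today: {total_pixels(today):,} pixels")    # ~1.0 million

tomorrow = {
    "instrument_cluster": (1920, 720),
    "center_stack": (2560, 1600),
    "passenger_display": (3840, 2160),   # a single 4K passenger screen
    "head_up_display": (1280, 720),
}
print(f"Multi-screen cockpit : {total_pixels(tomorrow):,} pixels")  # ~14.7 million

# For reference, the 16.6 million pixel ceiling is roughly two 4K panels:
print(f"Two 4K panels        : {2 * 3840 * 2160:,} pixels")         # 16,588,800
```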


The most impressive part of the Nvidia Drive CX isn't the hardware, though, but what Nvidia intends to do with it. For it, the company built a runtime called Nvidia Drive Studio, which is essentially a game engine for car interfaces. It offers a huge range of customization options, and the things Nvidia showed it doing were nothing short of impressive. The demos included navigation with 3D maps, dynamic lighting that keeps your attention on what matters, ambient occlusion, shadows, and more. Nvidia also demonstrated how customizable the gauges in the instrument cluster can be, and indicated that a car could hold multiple profiles, one for each driver. Because the Drive CX is so powerful, some of the instrument skins even used advanced lighting features such as sub-surface scattering to simulate materials like car paint or carbon fiber, which can't convincingly be slapped onto a polygon as a flat texture.
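
Nvidia hasn't published what Drive Studio's interface actually looks like, but a data-driven cluster skin along the lines described above might be sketched like this; every name and field here is hypothetical, purely to illustrate the kind of customization on show.

```python
# Hypothetical sketch of a data-driven instrument-cluster skin.
# Nvidia did not publish the Drive Studio API; names and fields here are
# invented purely to illustrate the kind of customization described above.

CARBON_FIBER = {
    "albedo_texture": "carbon_weave.png",
    "subsurface_scattering": True,   # simulated material response, not a flat texture
    "ambient_occlusion": True,
}

SPORT_PROFILE = {
    "driver": "driver_1",
    "gauges": [
        {"type": "tachometer",  "position": "center", "material": CARBON_FIBER},
        {"type": "speedometer", "position": "left",   "material": CARBON_FIBER},
        {"type": "navigation",  "position": "right",  "style": "3d_map"},
    ],
    "lighting": {"dynamic": True, "dim_inactive_gauges": True},
}

def load_profile(driver_id, profiles):
    """Pick the cluster layout registered for the driver currently in the seat."""
    return next(p for p in profiles if p["driver"] == driver_id)
```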


Naturally, all of this ties in nicely with the onboard infotainment system, which also offers a wide array of customization options. The idea behind Nvidia Drive Studio, however, isn't to provide a ready-to-go system, but to serve as a platform that car manufacturers can use to build their own experience for drivers. Once the APIs open up, or someone cracks them, there will certainly be a lot of third-party modding going on.


Of course, what Nvidia can do in the automotive space doesn't end there. The company also announced the Nvidia Drive PX, an advanced auto-pilot system for cars. It, too, is based on the Tegra X1 SoC, but rather than using just one, it uses two, giving the Drive PX a total of 2.3 teraflops of computational power. The system is designed to work with 12 cameras and can process up to 1.3 gigapixels per second.
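
As a rough sanity check on those camera numbers, here is a quick calculation; the per-camera resolution and frame rate are assumptions on our part, since Nvidia only quoted the aggregate figures.

```python
# Sanity check on the Drive PX camera numbers quoted above.
# Per-camera resolution and frame rate are assumptions for illustration;
# Nvidia only quoted the aggregate figures (12 cameras, 1.3 gigapixels/s).

cameras = 12
width, height = 1920, 1208          # assumed automotive camera resolution
fps = 30                            # assumed frame rate

pixels_per_second = cameras * width * height * fps
print(f"{pixels_per_second / 1e9:.2f} gigapixels/s")   # ~0.83 Gpix/s at these settings

# At 60 fps the same twelve cameras would need ~1.67 Gpix/s, so a
# 1.3 Gpix/s budget sits comfortably between a 30 fps and 60 fps rig.
```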


The Drive PX takes a fundamentally different approach to an auto-pilot compared to what we've seen before. Rather than relying on lasers, radar, and other ranging sensors, it uses those twelve cameras and a massive "Deep Neural Network." Working in tandem, the cameras and the network let the Drive PX recognize a wide range of objects on the road: pedestrians, partially occluded pedestrians, cars, vans, and more. It can also recognize speed cameras and police cars, which, we don't need to tell you, can be very useful.
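
Conceptually, the recognition side boils down to a per-frame detection loop like the minimal sketch below; `capture_frame` and `detect_objects` are hypothetical placeholders, as Nvidia didn't detail the actual software stack.

```python
# Minimal sketch of the per-frame recognition loop described above.
# `capture_frame` and `detect_objects` are hypothetical placeholders; the
# actual Drive PX software stack was not detailed at the press conference.

CLASSES_OF_INTEREST = {
    "pedestrian", "occluded_pedestrian", "car", "van",
    "speed_camera", "police_car",
}

def process_frames(cameras, capture_frame, detect_objects):
    """Run the (hypothetical) neural-network detector over every camera feed."""
    for cam in cameras:
        frame = capture_frame(cam)
        for detection in detect_objects(frame):      # label + confidence + box
            if detection.label in CLASSES_OF_INTEREST:
                yield cam, detection
```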


The neural network is also constantly being updated. If a car doesn't recognize an object, or if an object turns out on closer inspection to be something different from what the Drive PX originally thought, the car sends the image data to Nvidia. Nvidia processes it and folds the new link into the neural network for the next update of the Drive PX system, so if one car on the road fails to recognize a situation, that knowledge gets contributed to the brains of every car using the system. The deep learning pipeline is fully automated, too: with today's technology a car can be sufficiently trained in about 40 hours, where in the past the same process would take months, if not years.
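
The feedback loop Nvidia described could be sketched roughly as follows; the upload, retraining, and update-push functions are hypothetical stand-ins for whatever pipeline Nvidia actually runs.

```python
# Sketch of the fleet-learning loop described above, with hypothetical
# upload/retrain/push functions standing in for Nvidia's actual pipeline.

CONFIDENCE_THRESHOLD = 0.6   # assumed cut-off for "the car wasn't sure"

def on_detection(frame, detection, upload_to_nvidia):
    """Flag low-confidence or misclassified frames for the central training set."""
    if detection.confidence < CONFIDENCE_THRESHOLD:
        upload_to_nvidia(frame, detection)

def release_cycle(labeled_uploads, train_network, push_update, fleet):
    """Periodically fold the fleet's hard cases into the next model update."""
    model = train_network(labeled_uploads)   # fully automated deep-learning step
    for car in fleet:
        push_update(car, model)              # every car learns from one car's mistake
```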


The Drive PX system also includes a surround-view mapping feature. Using the multiple cameras, the computer can generate a single image of what's going on around the car and show it on a display from a chasing point of view, like the chase camera in a video game. It wouldn't be safe to rely on while driving at speed, of course, but for maneuvers such as parking it could be extremely useful.
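
In broad strokes, that surround view amounts to warping each camera's image onto a common ground plane and blending the results; the helpers in this sketch are hypothetical, since Nvidia didn't describe its implementation.

```python
# Simplified sketch of building a single surround view from several cameras.
# The warp/blend helpers are hypothetical; a real system would project each
# camera onto a common ground plane using its calibrated position and lens model.

def surround_view(cameras, capture_frame, warp_to_ground_plane, blend):
    """Stitch all camera feeds into one bird's-eye / chase-camera style image."""
    warped = [warp_to_ground_plane(capture_frame(cam), cam.calibration)
              for cam in cameras]
    return blend(warped)   # single composite image shown on the cockpit display
```
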
Because the Drive PX system also spatially maps the entire environment around itself, it can drive by itself, too. In fact, it can handle almost any situation, from long stretches of highway to city driving to parking itself. It even has an Auto-Valet feature, with which the car can venture into an unknown parking garage, find a free spot, and park in it. Unlike BMW's system, which the German automaker is showing off this week as well, Nvidia's Drive PX won't need a map of the garage.


One thing we find noteworthy about this auto-pilot technology is how different groups take fundamentally different approaches to solving the same problem. Some companies use lasers, radar, and ultrasound to detect obstacles and map the environment around them, while Nvidia has, rather radically, opted not to. The company didn't explicitly say it won't use those sensors, but from the demonstrations and the information we got at the press conference it's clear that the neural network is the heart of its technology. The other systems may accomplish the same task, but they don't appear to have any learning technique in place and are simply programmed by hand to recognize various objects. Nvidia's neural network appears to make the sensors that other companies rely on redundant.


To finish off, we'd like to touch on why this technology matters. The fact that Nvidia uses a hyper-advanced neural network or a highly customizable infotainment system isn't even the most important thing we learned today. What matters most about this announcement is that Nvidia is doing it in the first place: the platform will be available for all car manufacturers to use, so a single system can be developed to whole new heights rather than the wheel being reinvented every time a new car model comes out. A system adopted by the industry at large is exactly what is needed to bring this level of technology into mainstream cars, because it also offers an upgrade path for the future. A car is something that's good for 10, 15, or more years, while the technology inside it becomes obsolete after just a few years.
