Hardware

Autonomy, Innovation and Excellence at your Fingertips

Our Approach

An SSL-certified controller manages every movement of the brakes, doors, throttle, and steering. It communicates with our autonomous control system, which uses a combination of powerful computers and digital signal processors.

Other hardware elements form part of the perception stack used to build an awareness of the vehicle’s surroundings. Together, these elements allow the vehicle to drive fully autonomously in a controlled environment.

Artificial Intelligence

Our Artificial Intelligence system can identify foreign objects in the path of the vehicle.

HD Camera

High-resolution cameras complete the picture. We use these in conjunction with AI algorithms to classify nearby objects and predict their movements.

4D RADAR

Real-time 4D RADAR data detects hundreds of objects and calculates their relative velocities to foresee potential collisions well in advance.
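As an illustrative sketch (not Cavonix's actual algorithm), the range and relative closing speed that a 4D RADAR reports per object can feed a simple time-to-collision estimate; the field names and the 5-second warning threshold below are assumptions:

```python
# Illustrative sketch: estimate time-to-collision (TTC) from radar
# detections. Field names and thresholds are assumptions for illustration.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Return seconds until collision, or infinity if the object is
    holding distance or moving away (closing speed <= 0)."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def earliest_threat(detections, warn_seconds: float = 5.0):
    """Scan (object_id, range_m, closing_speed_mps) tuples and return
    the object with the smallest TTC below the warning threshold."""
    threat, best_ttc = None, warn_seconds
    for object_id, range_m, closing_speed in detections:
        ttc = time_to_collision(range_m, closing_speed)
        if ttc < best_ttc:
            threat, best_ttc = object_id, ttc
    return threat, best_ttc

detections = [
    ("car-12", 80.0, 10.0),   # closing at 10 m/s -> TTC 8 s
    ("ped-3", 12.0, 4.0),     # closing at 4 m/s  -> TTC 3 s
    ("sign-7", 30.0, -2.0),   # moving away       -> never a threat
]
print(earliest_threat(detections))  # ('ped-3', 3.0)
```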


Virtual Test & Build Safety

With our design philosophy of using the right technology in the right place, we incorporate layers of safety into our designs. This means low-level, ISO-certified hardware talking to the safety-critical systems, with multiple layers of machine monitoring to ensure a safe and robust system at all times.

Collision Avoidance

If objects are detected in the path of a vehicle, CAVSense can make decisions about how to avoid a collision. It can either bring the vehicle to a halt or it can calculate a new route around the object.

This is done whilst obeying strict rules to ensure vehicles make safe decisions. CAVSense also uses geofencing to ensure that if a vehicle deviates from its path, it is spotted instantly and the necessary corrections are made.
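To make the geofencing idea concrete, here is a minimal sketch, assuming flat local x/y coordinates in metres and an invented 1.5 m tolerance; it flags a vehicle whose cross-track distance from its planned path exceeds that tolerance:

```python
# Illustrative sketch: flag a vehicle that has deviated from its
# planned path. Coordinates and tolerance are assumptions.

import math

def cross_track_distance(p, a, b):
    """Perpendicular distance (m) from point p to segment a-b, all (x, y)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def off_path(position, path, tolerance_m=1.5):
    """True if the vehicle is further than tolerance_m from every
    segment of its planned path (a list of (x, y) waypoints)."""
    return all(
        cross_track_distance(position, path[i], path[i + 1]) > tolerance_m
        for i in range(len(path) - 1)
    )

path = [(0.0, 0.0), (100.0, 0.0)]
print(off_path((50.0, 0.5), path))   # False: within 1.5 m of the path
print(off_path((50.0, 4.0), path))   # True: 4 m off the centreline
```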

Vehicle Detection

The Cavonix video processing system uses Ethernet HD cameras to detect and classify objects such as other vehicles and pedestrians.

Detected objects can be tracked and from the tracking data, we can determine their direction of movement. This informs decision-making at junctions and collision avoidance.
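A hedged sketch of the tracking-to-direction step described above, using made-up timestamps and positions rather than real camera output:

```python
# Illustrative sketch: derive a tracked object's heading and speed
# from its last two tracked positions.

import math

def motion_from_track(track):
    """track: list of (t_seconds, x_m, y_m) samples, oldest first.
    Returns (heading_degrees, speed_mps); heading 0 deg = +x axis."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    dx, dy = x1 - x0, y1 - y0
    heading = math.degrees(math.atan2(dy, dx)) % 360.0
    speed = math.hypot(dx, dy) / dt
    return heading, speed

# A pedestrian tracked over two camera frames, 0.5 s apart:
track = [(0.0, 10.0, 5.0), (0.5, 10.0, 5.7)]
heading, speed = motion_from_track(track)
print(round(heading), round(speed, 2))  # 90 1.4  (moving in +y at 1.4 m/s)
```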

EVIE AI Radar Vision

Evie’s AI-enhanced radar vision allows vehicles to perceive their environment using radio waves. This is beneficial in challenging weather conditions such as rain, snow, or fog, where visual sensors like cameras or LiDAR may be limited. AI-enhanced radar also has longer-range capabilities than visual sensors, allowing autonomous vehicles to detect objects or obstacles from a greater distance. This is particularly useful in high-speed driving scenarios or when navigating complex environments. AI-enhanced radar vision complements other sensors, such as cameras, to provide a multi-modal perception system for autonomous vehicles.

CAV Hardware

CAN Data Acquisition Technology

CAVDaq is a powerful and flexible data acquisition board designed for multi-purpose versatility, capable of operating standalone or in tandem with a computer.
CAVDaq works seamlessly with CAVLab, translating analogue and discrete signals from the world around us into digital data that can be displayed, stored, and analysed.
Using CAVDaq, we can connect to a vast range of sensors, GPS systems, 4G and 5G radio telemetry, and CAN bus devices.
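A minimal sketch of the core job a data acquisition board performs: mapping raw ADC counts from an analogue sensor into engineering units. The 12-bit resolution, 5 V reference, and 10 mV/°C sensor are assumptions for illustration, not CAVDaq's actual specifications:

```python
# Illustrative sketch of analogue-to-digital signal translation.
# ADC_BITS, V_REF, and the sensor scaling are assumed values.

ADC_BITS = 12          # assumed converter resolution
V_REF = 5.0            # assumed full-scale reference voltage

def counts_to_volts(counts: int) -> float:
    """Map a raw ADC reading (0..4095) onto 0..V_REF volts."""
    return counts * V_REF / (2 ** ADC_BITS - 1)

def volts_to_temperature(volts: float) -> float:
    """Example sensor: a linear 10 mV/degC transducer (like an LM35)."""
    return volts / 0.010

raw = 620                       # raw sample from the ADC
volts = counts_to_volts(raw)
print(round(volts, 3), round(volts_to_temperature(volts), 1))  # 0.757 75.7
```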

CAVSense uses a combination of sensors to provide the full picture:
AI identifies foreign objects in the path of the vehicle; GPS is essential for determining the location of autonomous vehicles; 4D RADAR detects hundreds of objects and calculates their relative velocities, and HD Cameras are used in conjunction with AI algorithms to classify nearby objects and predict their movements.

CAVGPS Technology

CAVGPS is in-house-developed hardware that provides the accurate, reliable GPS essential for determining the location of autonomous vehicles.
The CAVGPS receiver can be retrofitted to our CAVDaq board, providing a low-cost RTK (Real-Time Kinematic) GPS system with positioning accurate to 1 cm. Corrections can be delivered via NTRIP (over a 5G connection) or via radio for areas up to a 10 km radius, giving us fine-grained control over the autonomous systems.
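The choice between the two correction transports described above can be sketched as a simple fallback rule; the function name and the decision thresholds here are assumptions, not CAVGPS's actual logic:

```python
# Illustrative sketch: choose an RTK correction link. Prefer NTRIP
# where there is 5G coverage, fall back to the radio link inside its
# ~10 km radius, else run without RTK corrections.

RADIO_RANGE_KM = 10.0

def pick_correction_link(has_5g: bool, base_distance_km: float) -> str:
    if has_5g:
        return "ntrip"      # corrections streamed over the internet
    if base_distance_km <= RADIO_RANGE_KM:
        return "radio"      # direct link to the base station
    return "none"           # no RTK: fall back to plain GPS accuracy

print(pick_correction_link(True, 2.0))    # ntrip
print(pick_correction_link(False, 8.0))   # radio
print(pick_correction_link(False, 25.0))  # none
```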

Driven by Cavonix

The biggest challenges faced in the autonomous industry today are time and money.

Currently, autonomy isn’t cheap, and development life cycles can span years. These are the two areas where Cavonix has the advantage – it all comes down to code.

Cavonix has created its own development environment that enables development up to 100x faster than the nearest competitor for real-time applications. Its unique multi-threading system utilises up to 100% of the processing power of an embedded computer, meaning an entire autonomous stack can be implemented on a single embedded computer system.

This means we can deliver our systems in record time, on budget.

Cavonix has created two complete solutions:

First, a “Lift-and-Shift” full Autonomous Control System (ACS), as well as a Sensor Fusion (SF) system that will enable autonomous driving on any vehicle, using any fuel, together with a full Fleet Management System (FMS) to monitor your vehicles and log vital data along the way.

Secondly, we have created an AI-powered Safety System that can detect obstacles such as workers or pedestrians and can warn operators of their surroundings and blind spots. For example, this could be fitted to an excavator to warn operators of other workers nearby.

These systems have been fully developed and tested and are now ready to come to market.

Technology

Cavonix has created its own development environment that enables development up to 100x faster for real-time applications.

Value

We can implement the entire autonomous stack on a single, modestly priced, embedded computer system.

Efficiency

The unique multi-threading system enables us to utilise 100% of the processing power of an embedded computer system.
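Cavonix's scheduler is proprietary; as a generic sketch of the underlying idea, independent per-frame work can be spread across every core of an embedded computer by sizing a worker pool to the CPU count (Python shown here uses processes rather than threads, since CPython threads do not parallelise CPU-bound work):

```python
# Illustrative sketch (not Cavonix's scheduler): keep every core busy
# by farming independent perception frames out to one worker per core.

import os
from concurrent.futures import ProcessPoolExecutor

def process_frame(frame_id: int) -> int:
    """Stand-in for CPU-bound per-frame perception work."""
    return sum(i * i for i in range(10_000)) % 1000 + frame_id

if __name__ == "__main__":
    frames = list(range(32))
    # One worker per core; frames are processed in parallel.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(process_frame, frames))
    print(len(results))  # 32
```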

Autonomous Control System (ACS)

We have created a “Lift-and-Shift” full Autonomous Control System (ACS) as well as a Sensor Fusion (SF) system that will enable autonomous driving on any vehicle, using any fuel, together with a full Fleet Management System (FMS) to monitor your vehicles and log vital data along the way.

AI-Powered Safety System

Our AI-powered safety system can detect obstacles such as pedestrians, animals, and environmental obstructions, and can warn operators of their surroundings and blind spots. For example, this could be fitted to an excavator to warn operators of other workers nearby.

Why No LiDAR?

While LiDAR (Light Detection and Ranging) technology has proven to be valuable in various applications, it also has some limitations and disadvantages.

High Cost

LiDAR systems can be quite expensive, making them less accessible for individuals or organizations with limited budgets. The cost includes the acquisition of hardware, software, and maintenance expenses. This can hinder widespread adoption and limit its use to specific industries or well-funded projects.

Limited Range and Resolution

LiDAR systems are constrained by their range and resolution capabilities. The range can vary depending on the specific equipment used, but typically, LiDAR’s effectiveness diminishes at long distances. Similarly, the resolution, which determines the level of detail captured, decreases with distance. This can impact the accuracy and quality of the collected data, particularly for distant or small objects.

Vulnerability to Weather Conditions

LiDAR heavily relies on the ability of its emitted laser beams to bounce back from objects to create a 3D representation. However, certain weather conditions such as heavy rain, fog, or snow can interfere with the accuracy of the measurements. These conditions may scatter or absorb the laser beams, resulting in reduced data quality or even complete data loss.

Limited Penetration of Certain Materials

LiDAR struggles to penetrate certain materials, particularly dense foliage or highly reflective surfaces like mirrors. In such cases, the laser beams may not reach the target or may produce distorted data due to multiple reflections. This limitation can impact LiDAR’s usability in applications like mapping vegetation or densely forested areas.

Data Processing and Interpretation

The raw data captured by LiDAR systems can be voluminous and complex, requiring significant computational resources for processing and interpretation. Extracting meaningful information from the point cloud data requires advanced algorithms and specialized software. This can pose challenges in terms of time, expertise, and computing power, making it less accessible for users without sufficient technical knowledge or resources.
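One standard reduction step for point clouds, voxel-grid downsampling, illustrates why this processing is compute-heavy; the pure-Python sketch below is a stand-in for what specialised libraries do natively, with the 0.5 m voxel size chosen arbitrarily:

```python
# Illustrative sketch: voxel-grid downsampling of a LiDAR point cloud.
# All points falling in the same cube are replaced by their centroid.

from collections import defaultdict

def voxel_downsample(points, voxel_size=0.5):
    """points: list of (x, y, z) tuples in metres. Returns one centroid
    per occupied voxel, shrinking the cloud while keeping its shape."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud)
print(len(reduced))  # 2: the first two points merged into one centroid
```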