JetBot: Low-Cost Open-Source 2-Wheel Robot by NVIDIA
Project Summary
A comprehensive guide for building and programming your NVIDIA Jetson-powered robot.
Project Overview
This project uses the NVIDIA Jetson platform to create an intelligent robot capable of autonomous navigation, object detection, and collision avoidance. It combines edge AI processing with low-cost robotics hardware, making it a practical platform for running AI-powered robotics applications directly on the robot.
Hardware Components
Computing Platform: NVIDIA Jetson Nano 4GB/2GB (Developer Kit Version)
Chassis: Waveshare JetBot AI Kit chassis with 3D-printed components
Motors: 2x TT Gear Motors with 6:1 gear ratio
Power System:
18650 Lithium Battery Pack (7.4V)
Waveshare Motor Driver HAT
Sensors:
Camera: Raspberry Pi Camera Module V2 (8 MP) or IMX219-77 camera module
Distance sensor: VL53L0X time-of-flight sensor
IMU: Optional MPU9250 9-DOF sensor
Additional Components:
WS2812 RGB LED array
OLED display (optional)
Assembly Instructions
Step 1: Jetson Setup
Flash the JetPack 4.6+ image to a microSD card (e.g., with balenaEtcher or NVIDIA SDK Manager)
Complete initial Ubuntu configuration
Install JetBot software:
git clone https://github.com/NVIDIA-AI-IOT/jetbot
cd jetbot
sudo python3 setup.py install
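To confirm the package installed correctly, a quick import check such as the following should succeed (the printed message is just illustrative):
python3 -c "from jetbot import Robot; print('jetbot OK')"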
Step 2: Hardware Assembly
Mount Jetson Nano to chassis baseplate
Connect motors to motor controller using PH2.0 connectors
Install camera module using CSI-2 ribbon cable
Connect battery to power distribution board
Step 3: Software Configuration
Configure camera interface:
from jetbot import Camera

camera = Camera.instance(width=300, height=300)  # capture 300x300 frames
Initialize motor controller:
from jetbot import Robot
robot = Robot()
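A quick sanity check using the Robot API above drives forward briefly and stops (run it with the wheels off the ground first):
import time
from jetbot import Robot

robot = Robot()
robot.forward(0.3)   # both motors forward at 30% power
time.sleep(0.5)
robot.stop()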
AI Model Implementation
Used Models
Collision avoidance: ResNet18 classifier trained on a blocked/free image dataset collected on the robot
Road following: Dronet-style CNN with regression output
Training Process
Collect dataset using Jupyter notebook interface
Transfer learning using PyTorch
Training workflow:
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)  # start from ImageNet weights
model.fc = nn.Linear(512, 2)              # replace the head: blocked vs. free
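A minimal transfer-learning loop, sketched under the assumption that dataset holds the (image, label) pairs gathered with the notebook interface; the hyperparameters here are illustrative:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import models

# 'dataset' is assumed: (image, label) pairs from the data-collection notebook
loader = DataLoader(dataset, batch_size=8, shuffle=True)

device = torch.device('cuda')
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(512, 2)          # two classes: blocked / free
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(30):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), 'best_model.pth')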
Deployment
Convert model to TensorRT format
Optimize for Jetson using torch2trt:
import torch
from torch2trt import torch2trt

data = torch.zeros((1, 3, 224, 224)).cuda()   # dummy input matching the model's input shape
model_trt = torch2trt(model.eval().cuda(), [data])
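The optimized model can be saved and reloaded later through torch2trt's TRTModule, so the conversion does not have to be repeated on every boot:
import torch
from torch2trt import TRTModule

# save the optimized engine's weights
torch.save(model_trt.state_dict(), 'best_model_trt.pth')

# later, on the robot: reload without re-running the conversion
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('best_model_trt.pth'))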
Control Software
from jetbot import Robot
import time

robot = Robot()

# Smooth movement: ramp up to the target speed, hold it, then stop
def smooth_move(speed=0.5, duration=1.0, ramp_time=0.3):
    steps = 10
    for i in range(1, steps + 1):
        value = speed * i / steps        # step the speed up gradually
        robot.left_motor.value = value
        robot.right_motor.value = value
        time.sleep(ramp_time / steps)
    time.sleep(duration)                 # hold the target speed
    robot.stop()

# Object-aware movement: drive forward when clear, otherwise turn away
def intelligent_move(obstacle_distance):
    if obstacle_distance > 20:  # distance in cm
        robot.forward(0.4)
    else:
        robot.left(0.3)
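One way to close the loop is to feed intelligent_move from the VL53L0X listed under Sensors. This sketch assumes the Adafruit Blinka stack and the adafruit-circuitpython-vl53l0x driver are installed, and continues from the listing above (robot and intelligent_move already defined):
import board
import busio
import adafruit_vl53l0x

i2c = busio.I2C(board.SCL, board.SDA)
tof = adafruit_vl53l0x.VL53L0X(i2c)

try:
    while True:
        intelligent_move(tof.range / 10.0)  # sensor reports millimeters; convert to cm
        time.sleep(0.05)
except KeyboardInterrupt:
    robot.stop()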
Operation Guide
Power On:
Switch battery to ON position
Wait for status LED illumination
Connecting:
Access via browser at http://jetbot-ip:8888
Default password: jetbot
Autonomous Mode:
Run Jupyter notebook for:
Live object detection
Collision-free navigation
Road following
Advanced Features
Real-time Object Detection using SSD-MobileNet (see the sketch after this list)
Gesture Control using MediaPipe models
ROS Integration (Melodic/Noetic) for SLAM
Web-based Remote Control with video streaming
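As an example of the object-detection feature above, the JetBot repo provides an ObjectDetector wrapper around a TensorRT SSD engine. This sketch follows the object_following notebook; the engine filename may differ on your build:
from jetbot import Camera, ObjectDetector

camera = Camera.instance(width=300, height=300)
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')  # pre-built engine from the JetBot release

detections = model(camera.value)   # run detection on the current frame
for det in detections[0]:          # detections for the first (only) image
    print(det['label'], det['confidence'], det['bbox'])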
Performance Optimization
Set Jetson to 10W mode:
sudo nvpmodel -m 0
Enable GPU-accelerated video decoding
Use mixed-precision (FP16) inference (see the sketch after this list)
Implement model pruning with TorchPruner
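For the mixed-precision item above, torch2trt accepts an fp16_mode flag at conversion time; a sketch, reusing the collision-avoidance model from earlier:
import torch
from torch2trt import torch2trt

data = torch.zeros((1, 3, 224, 224)).cuda()
# build the TensorRT engine with FP16 kernels enabled
model_trt = torch2trt(model.eval().cuda(), [data], fp16_mode=True)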
Troubleshooting
Camera Not Detected:
sudo systemctl restart nvargus-daemon
Motor Stuttering:
Check battery voltage (>6.5V)
Verify PWM frequency settings
High Latency:
camera = Camera.instance(fps=15)  # reduce frame rate
Future Enhancements
Multi-modal fusion (camera + LiDAR)
ROS 2 Humble integration
Federated learning capabilities
5G connectivity for edge-cloud hybrid processing
License
MIT License