This scenario demonstrates simulating the dynamics and imagery of multiple agents on a single machine with the Python API. Multiple vehicles can be added to the `vehicle_model` block of `FlightGogglesClient.yaml`. See Python Parameters for detailed instructions on writing a configuration file.
Here is an example configuration for multiple agents:
```yaml
state:
  sceneFilename: "Stata_GroundFloor"
  camWidth: 640
  camHeight: 480
  camFOV: 70.0
  camDepthScale: 0.20
renderer:
  0:
    inputPort: "10253"
    outputPort: "10254"
camera_model:
  0:
    ID: cam0
    channels: 3
    renderer: 0
    freq: 30
    outputShaderType: -1
    hasCollisionCheck: False
  1:
    ID: cam1
    channels: 3
    renderer: 0
    freq: 30
    outputShaderType: -1
    hasCollisionCheck: False
vehicle_model:
  uav1:
    type: "uav"
    initialPose: [-6.5, -18.5, -2.0, 1.0, 0, 0, 0]
    imu_freq: 200
    cameraInfo:
      cam0:
        relativePose: [0.2, 0, 0, 1, 0, 0, 0]
  car1:
    type: "car"
    initialPose: [-6.5, -18.5, -0.2, 1.0, 0, 0, 0]
    imu_freq: 200
    cameraInfo:
      cam1:
        relativePose: [0, 0, 0, 1, 0, 0, 0]
  uav2:
    type: "uav"
    initialPose: [-6.5, -12.5, -2.0, 1.0, 0, 0, 0]
    imu_freq: 200
```
Note that a vehicle model can run without any attached camera or object, as `uav2` does above.
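Every camera listed under a vehicle's `cameraInfo` must correspond to an entry in `camera_model`, and every `camera_model` entry must reference an existing renderer. The following sketch checks those cross-references over a plain-dict mirror of the YAML above; the `check_config` helper is hypothetical (not part of the FlightGoggles API) and uses only the standard library:

```python
# Plain-dict mirror of the example YAML above (only the fields needed
# for the cross-reference check are kept).
config = {
    "renderer": {0: {"inputPort": "10253", "outputPort": "10254"}},
    "camera_model": {
        0: {"ID": "cam0", "renderer": 0},
        1: {"ID": "cam1", "renderer": 0},
    },
    "vehicle_model": {
        "uav1": {"type": "uav", "cameraInfo": {"cam0": {}}},
        "car1": {"type": "car", "cameraInfo": {"cam1": {}}},
        "uav2": {"type": "uav"},  # no camera attached -- still valid
    },
}

def check_config(cfg):
    """Hypothetical helper: verify camera and renderer references."""
    cam_ids = {cam["ID"] for cam in cfg["camera_model"].values()}
    for name, vehicle in cfg["vehicle_model"].items():
        for cam_id in vehicle.get("cameraInfo", {}):
            assert cam_id in cam_ids, f"{name} references unknown camera {cam_id}"
    for cam in cfg["camera_model"].values():
        assert cam["renderer"] in cfg["renderer"], "unknown renderer"
    return True

print(check_config(config))  # True
```

A check like this catches a misspelled camera ID before the simulator is launched, which is cheaper than debugging a missing video stream afterwards.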
Here is sample code for a multi-agent simulation:
```python
import numpy as np
from IPython.display import HTML, display
from flightgoggles.env import flightgoggles_env

if __name__ == "__main__":
    env = flightgoggles_env()

    # Advance each vehicle for 200 steps of 0.01 s (2 s of simulated time).
    for i in range(200):
        env.proceed_motor_speed("uav1", np.ones(4) * 1133.0, 0.01)
        env.proceed("car1", 1., 1., 0.01)
        env.proceed_motor_speed("uav2", np.ones(4) * 1133.0, 0.01)

    # Render the recorded camera streams as HTML5 videos.
    ani_set = env.plot_state_video()
    if "cam0" in ani_set.keys():
        display(HTML(ani_set["cam0"].to_html5_video()))
    if "cam1" in ani_set.keys():
        display(HTML(ani_set["cam1"].to_html5_video()))

    env.close()
```
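Each call to `proceed()` or `proceed_motor_speed()` above advances the corresponding vehicle by its final argument, 0.01 s, so the 200-iteration loop simulates 2 s. To target a different simulated duration, the step count is simple arithmetic (a standalone sketch with illustrative variable names, no FlightGoggles dependency):

```python
imu_freq = 200           # Hz, imu_freq from the vehicle_model config
imu_dt = 1.0 / imu_freq  # IMU sampling period: 0.005 s
sim_dt = 0.01            # step size passed to proceed()/proceed_motor_speed()
duration = 2.0           # desired simulated time in seconds

steps = round(duration / sim_dt)
print(steps)  # 200 iterations reproduce the loop above
```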
Fig 1. The result of the example code (cam 0)
Fig 2. The result of the example code (cam 1)