In this scenario, we consider simulating multiple agents using multiple host computers on a local network. To connect computers that are not on the same network, see Setting up a VPN Server and Client.
To use multiple machines, the FlightGoggles client on each machine needs to be started with different input and output ports. The IP address of the Python client also has to be passed as an argument. These example commands launch renderers on different machines:
./FlightGogglesv3.x86_64 -client-ip "client.ip.address" -input-port 10253 -output-port 10254
./FlightGogglesv3.x86_64 -client-ip "client.ip.address" -input-port 10255 -output-port 10256
These port configurations have to be added in the renderer block of FlightGogglesClients.yaml.
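Each renderer needs its own input/output port pair. As an illustrative sketch (the consecutive-pair numbering here is just a convention matching the example commands, not something FlightGoggles requires), unique port pairs for N renderers can be generated from a base port:

```python
def allocate_renderer_ports(num_renderers, base_port=10253):
    """Return one (input_port, output_port) pair per renderer.

    Renderer i gets ports base_port + 2*i and base_port + 2*i + 1.
    The default base port 10253 matches the example commands above;
    any free port range works equally well.
    """
    return [(base_port + 2 * i, base_port + 2 * i + 1)
            for i in range(num_renderers)]

# Two renderers, as in the example commands:
print(allocate_renderer_ports(2))  # [(10253, 10254), (10255, 10256)]
```

The generated pairs are what go into the renderer block of FlightGogglesClients.yaml and onto the corresponding renderer command lines.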
After registering renderers, the user can configure which renderer each camera will use in the camera_model block. A stereo camera can be simulated by attaching two cameras to one vehicle model with different relative poses. An unattached camera will not update its position and will record video from its initial pose.
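For the stereo pair in this example, cam0 and cam1 are offset by +0.2 m and -0.2 m along the vehicle's x-axis via their cameraInfo relativePose entries. A minimal check of the resulting stereo baseline, assuming the first three relativePose values are the [x, y, z] position offsets (the remaining four being the quaternion):

```python
# relativePose values from the cameraInfo block below; the first three
# entries are assumed to be the position offset in metres.
cam0_pose = [0.2, 0, 0, 1, 0, 0, 0]
cam1_pose = [-0.2, 0, 0, 1, 0, 0, 0]

# Euclidean distance between the two camera positions
baseline = sum((a - b) ** 2
               for a, b in zip(cam0_pose[:3], cam1_pose[:3])) ** 0.5
print(round(baseline, 6))  # 0.4
```

A 0.4 m baseline is what downstream stereo-depth code would use for disparity-to-depth conversion.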
state:
  sceneFilename: "Stata_GroundFloor"
  camWidth: 640
  camHeight: 480
  camFOV: 70.0
  camDepthScale: 0.20
  renderer:
    0:
      inputPort: "10253"
      outputPort: "10254"
    1:
      inputPort: "10255"
      outputPort: "10256"
  objects:
    0:
      ID: uav1
      prefabID: Blackeagle
      size_x: 5
      size_y: 5
      size_z: 5
  camera_model:
    0:
      ID: cam0
      channels: 3
      renderer: 0
      freq: 30
      outputShaderType: -1
      hasCollisionCheck: False
    1:
      ID: cam1
      channels: 3
      renderer: 1
      freq: 30
      outputShaderType: -1
      hasCollisionCheck: False
    2:
      ID: cam2
      channels: 3
      renderer: 1
      freq: 30
      outputShaderType: -1
      hasCollisionCheck: False
      initialPose: [-10.5, -18.5, -2, 1.0, 0, 0, 0]
  vehicle_model:
    uav1:
      type: "uav"
      initialPose: [-6.5, -18.5, -2, 0.707, 0, 0, -0.707]
      imu_freq: 200
      objectsInfo:
        uav1:
          relativePose: [0, 0, 0.3, 0.707, 0, 0, -0.707]
      cameraInfo:
        cam0:
          relativePose: [0.2, 0, 0, 1, 0, 0, 0]
        cam1:
          relativePose: [-0.2, 0, 0, 1, 0, 0, 0]
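A common mistake in this configuration is pointing a camera_model entry at a renderer index that was never registered. As an illustrative sanity check (not part of the FlightGoggles API; flightgoggles_env does its own loading), the parsed configuration can be validated like this:

```python
def undefined_renderers(state):
    """Return IDs of cameras whose 'renderer' index is not defined
    in the 'renderer' block of the parsed configuration."""
    defined = set(state.get("renderer", {}))
    return [cam["ID"]
            for cam in state.get("camera_model", {}).values()
            if cam["renderer"] not in defined]

# Excerpt of the configuration above, as it would look after YAML
# parsing (only the fields relevant to the check are kept):
state = {
    "renderer": {
        0: {"inputPort": "10253", "outputPort": "10254"},
        1: {"inputPort": "10255", "outputPort": "10256"},
    },
    "camera_model": {
        0: {"ID": "cam0", "renderer": 0},
        1: {"ID": "cam1", "renderer": 1},
        2: {"ID": "cam2", "renderer": 2},  # index 2 was never registered
    },
}
print(undefined_renderers(state))  # ['cam2']
```

Running such a check before launching the renderers catches mismatches between the renderer and camera_model blocks early.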
This is sample code to test the multi-machine configuration:
import numpy as np
# display and HTML come from IPython; this sample is meant to run in a notebook
from IPython.display import display, HTML

from flightgoggles.env import flightgoggles_env

if __name__ == "__main__":
    env = flightgoggles_env()
    # Spin all four motors at a constant speed for 200 steps of 0.01 s each
    for i in range(200):
        env.proceed_motor_speed("uav1", np.ones(4) * 1133.0, 0.01)
    ani_set = env.plot_state_video()
    if "cam0" in ani_set.keys():
        display(HTML(ani_set["cam0"].to_html5_video()))
    if "cam1" in ani_set.keys():
        display(HTML(ani_set["cam1"].to_html5_video()))
    if "cam2" in ani_set.keys():
        display(HTML(ani_set["cam2"].to_html5_video()))
    env.close()
Fig 1. The result of the example code (cam 0)
Fig 2. The result of the example code (cam 1)
Fig 3. The result of the example code (cam 2)
Attachments:
multi_machine_cam1.mp4 (video/mp4)
multi_machine_cam2.mp4 (video/mp4)