refactor: move env_manager to new repo

Ilya Uraev 2024-12-09 18:19:38 +03:00
parent 988800abc0
commit 223807255b
125 changed files with 4 additions and 17930 deletions

3
.gitmodules vendored Normal file
View file

@ -0,0 +1,3 @@
[submodule "env_manager"]
path = env_manager
url = git@gitlab.com:solid-sinusoid/env_manager.git

1
env_manager Submodule

@ -0,0 +1 @@
Subproject commit ab51561f9de32f5e1acdb913a4c7bac22b587c88

View file

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -1,5 +0,0 @@
from .lights import *
from .objects import *
from .robots import *
from .sensors import *
from .terrains import *

View file

@ -1,15 +0,0 @@
from .camera import CameraData
from .light import LightData
from .objects import (
BoxObjectData,
CylinderObjectData,
MeshData,
ModelData,
ObjectData,
PlaneObjectData,
SphereObjectData,
ObjectRandomizerData,
)
from .robot import RobotData
from .terrain import TerrainData
from .scene import SceneData

View file

@ -1,111 +0,0 @@
from dataclasses import dataclass, field
import numpy as np
@dataclass
class CameraData:
"""
CameraData stores the parameters and configuration settings for an RGB-D camera.
Attributes:
----------
name : str
The name of the camera instance. Default is an empty string.
enable : bool
Boolean flag to enable or disable the camera. Default is True.
type : str
Type of the camera. Default is "rgbd_camera".
relative_to : str
The reference frame relative to which the camera is placed. Default is "base_link".
width : int
The image width (in pixels) captured by the camera. Default is 128.
height : int
The image height (in pixels) captured by the camera. Default is 128.
image_format : str
The image format used (e.g., "R8G8B8"). Default is "R8G8B8".
update_rate : int
The rate (in Hz) at which the camera provides updates. Default is 10 Hz.
horizontal_fov : float
The horizontal field of view (in radians). Default is pi / 3.0.
vertical_fov : float
The vertical field of view (in radians). Default is pi / 3.0.
clip_color : tuple[float, float]
The near and far clipping planes for the color camera. Default is (0.01, 1000.0).
clip_depth : tuple[float, float]
The near and far clipping planes for the depth camera. Default is (0.05, 10.0).
noise_mean : float | None
The mean value of the Gaussian noise applied to the camera's data. Default is None (no noise).
noise_stddev : float | None
The standard deviation of the Gaussian noise applied to the camera's data. Default is None (no noise).
publish_color : bool
Whether or not to publish color data from the camera. Default is False.
publish_depth : bool
Whether or not to publish depth data from the camera. Default is False.
publish_points : bool
Whether or not to publish point cloud data from the camera. Default is False.
spawn_position : tuple[float, float, float]
The initial spawn position (x, y, z) of the camera relative to the reference frame. Default is (0, 0, 1).
spawn_quat_xyzw : tuple[float, float, float, float]
The initial spawn orientation of the camera in quaternion (x, y, z, w). Default is (0, 0.70710678118, 0, 0.70710678118).
random_pose_rollouts_num : int
The number of random pose rollouts. Default is 1.
random_pose_mode : str
The mode for random pose generation (e.g., "orbit"). Default is "orbit".
random_pose_orbit_distance : float
The fixed distance from the object in "orbit" mode. Default is 1.0.
random_pose_orbit_height_range : tuple[float, float]
The range of vertical positions (z-axis) in "orbit" mode. Default is (0.1, 0.7).
random_pose_orbit_ignore_arc_behind_robot : float
The arc angle (in radians) behind the robot to ignore when generating random poses. Default is pi / 8.
random_pose_select_position_options : list[tuple[float, float, float]]
A list of preset position options for the camera in random pose mode. Default is an empty list.
random_pose_focal_point_z_offset : float
The offset in the z-direction of the focal point when generating random poses. Default is 0.0.
random_pose_rollout_counter : float
A counter tracking the number of rollouts completed for random poses. Default is 0.0.
"""
name: str = field(default_factory=str)
enable: bool = field(default=True)
type: str = field(default="rgbd_camera")
relative_to: str = field(default="base_link")
width: int = field(default=128)
height: int = field(default=128)
image_format: str = field(default="R8G8B8")
update_rate: int = field(default=10)
horizontal_fov: float = field(default=np.pi / 3.0)
vertical_fov: float = field(default=np.pi / 3.0)
clip_color: tuple[float, float] = field(default=(0.01, 1000.0))
clip_depth: tuple[float, float] = field(default=(0.05, 10.0))
noise_mean: float | None = None
noise_stddev: float | None = None
publish_color: bool = field(default=False)
publish_depth: bool = field(default=False)
publish_points: bool = field(default=False)
spawn_position: tuple[float, float, float] = field(default=(0.0, 0.0, 1.0))
spawn_quat_xyzw: tuple[float, float, float, float] = field(
default=(0.0, 0.70710678118, 0.0, 0.70710678118)
)
random_pose_rollouts_num: int = field(default=1)
random_pose_mode: str = field(default="orbit")
random_pose_orbit_distance: float = field(default=1.0)
random_pose_orbit_height_range: tuple[float, float] = field(default=(0.1, 0.7))
random_pose_orbit_ignore_arc_behind_robot: float = field(default=np.pi / 8)
random_pose_select_position_options: list[tuple[float, float, float]] = field(
default_factory=list
)
random_pose_focal_point_z_offset: float = field(default=0.0)
random_pose_rollout_counter: float = field(default=0.0)
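A minimal usage sketch for the dataclass above. The import path is an assumption (adjust to wherever CameraData lives in the relocated env_manager package); only the overridden fields change, everything else keeps the documented defaults.

import numpy as np

from env_manager.models.configs import CameraData  # hypothetical import path

wrist_cam = CameraData(
    name="wrist_rgbd",
    width=640,
    height=480,
    horizontal_fov=np.pi / 2.0,
    publish_color=True,
    publish_points=True,
)
assert wrist_cam.type == "rgbd_camera"  # defaults are preserved
assert wrist_cam.update_rate == 10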

View file

@ -1,39 +0,0 @@
from dataclasses import dataclass, field
@dataclass
class LightData:
"""
LightData stores the configuration and settings for a light source in a simulation or rendering environment.
Attributes:
----------
type : str
The type of the light source (e.g., "sun", "point", "spot"). Default is "sun".
direction : tuple[float, float, float]
The direction vector of the light source in (x, y, z). This is typically used for directional lights like the sun.
Default is (0.5, -0.25, -0.75).
random_minmax_elevation : tuple[float, float]
The minimum and maximum elevation angles (in radians) for randomizing the light's direction.
Default is (-0.15, -0.65).
color : tuple[float, float, float, float]
The RGBA color of the light source. Each value ranges from 0 to 1. Default is (1.0, 1.0, 1.0, 1.0), which represents white light.
distance : float
The effective distance of the light source. Default is 1000.0 units.
visual : bool
A flag indicating whether the light source is visually represented in the simulation (e.g., whether it casts visible rays).
Default is True.
radius : float
The radius of the light's influence (for point and spot lights). Default is 25.0 units.
model_rollouts_num : int
The number of rollouts for randomizing light configurations in different simulation runs. Default is 1.
"""
type: str = field(default="sun")
direction: tuple[float, float, float] = field(default=(0.5, -0.25, -0.75))
random_minmax_elevation: tuple[float, float] = field(default=(-0.15, -0.65))
color: tuple[float, float, float, float] = field(default=(1.0, 1.0, 1.0, 1.0))
distance: float = field(default=1000.0)
visual: bool = field(default=True)
radius: float = field(default=25.0)
model_rollouts_num: int = field(default=1)
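A short, hedged sketch of customizing a light configuration (same assumed import path as above); RGBA components are in the 0..1 range.

from env_manager.models.configs import LightData  # hypothetical import path

warm_sun = LightData(
    type="sun",
    direction=(0.3, -0.2, -0.9),
    color=(1.0, 0.95, 0.8, 1.0),  # slightly warm white
    distance=500.0,
    visual=False,  # do not render a visible sphere for the light source
)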

View file

@ -1,198 +0,0 @@
from dataclasses import dataclass, field
from env_manager.utils.types import Pose
@dataclass
class ObjectRandomizerData:
"""
ObjectRandomizerData contains parameters for randomizing object properties during simulation.
Attributes:
----------
count : int
The number of objects to randomize. Default is 0.
random_pose : bool
Flag indicating whether to randomize both the position and orientation of objects. Default is False.
random_position : bool
Flag indicating whether to randomize the position of objects. Default is False.
random_orientation : bool
Flag indicating whether to randomize the orientation of objects. Default is False.
random_model : bool
Flag indicating whether to randomize the model of the objects. Default is False.
random_spawn_position_segments : list
List of segments within which the object can be randomly spawned. Default is an empty list.
random_spawn_position_update_workspace_centre : bool
Flag indicating whether to update the workspace center during random spawning. Default is False.
random_spawn_volume : tuple[float, float, float]
The volume within which objects can be randomly spawned, defined by (x, y, z). Default is (0.5, 0.5, 0.5).
models_rollouts_num : int
The number of rollouts for randomizing models. Default is 0.
"""
count: int = field(default=0)
random_pose: bool = field(default=False)
random_position: bool = field(default=False)
random_orientation: bool = field(default=False)
random_model: bool = field(default=False)
random_spawn_position_segments: list = field(default_factory=list)
random_spawn_position_update_workspace_centre: bool = field(default=False)
random_spawn_volume: tuple[float, float, float] = field(default=(0.5, 0.5, 0.5))
models_rollouts_num: int = field(default=0)
random_color: bool = field(default=False)
@dataclass
class ObjectData:
"""
ObjectData stores the base properties for any object in the simulation environment.
Attributes:
----------
name : str
The name of the object. Default is "object".
type : str
The type of the object (e.g., "sphere", "box"). Default is an empty string.
relative_to : str
The reference frame relative to which the object is positioned. Default is "world".
position : tuple[float, float, float]
The position of the object in (x, y, z) coordinates. Default is (0.0, 0.0, 0.0).
orientation : tuple[float, float, float, float]
The orientation of the object as a quaternion (x, y, z, w). Default is (1.0, 0.0, 0.0, 0.0).
static : bool
Flag indicating if the object is static in the simulation (immovable). Default is False.
randomize : ObjectRandomizerData
Object randomizer settings for generating randomized object properties. Default is an empty ObjectRandomizerData instance.
"""
name: str = field(default="object")
type: str = field(default_factory=str)
relative_to: str = field(default="world")
position: tuple[float, float, float] = field(default=(0.0, 0.0, 0.0))
orientation: tuple[float, float, float, float] = field(default=(1.0, 0.0, 0.0, 0.0))
color: tuple[float, float, float, float] | None = field(default=None)
static: bool = field(default=False)
randomize: ObjectRandomizerData = field(default_factory=ObjectRandomizerData)
@dataclass
class PrimitiveObjectData(ObjectData):
"""
PrimitiveObjectData defines the base properties for primitive objects (e.g., spheres, boxes) in the simulation.
Attributes:
----------
collision : bool
Flag indicating whether the object participates in collision detection. Default is True.
visual : bool
Flag indicating whether the object has a visual representation in the simulation. Default is True.
color : tuple[float, float, float, float] | None
The RGBA color of the object, inherited from ObjectData. Default is None (a random color may be assigned by the model wrappers when left unset).
mass : float
The mass of the object. Default is 0.1.
"""
collision: bool = field(default=True)
visual: bool = field(default=True)
mass: float = field(default=0.1)
@dataclass
class SphereObjectData(PrimitiveObjectData):
"""
SphereObjectData defines the specific properties for a spherical object in the simulation.
Attributes:
----------
radius : float
The radius of the sphere. Default is 0.025.
friction : float
The friction coefficient of the sphere when in contact with surfaces. Default is 1.0.
"""
radius: float = field(default=0.025)
friction: float = field(default=1.0)
@dataclass
class PlaneObjectData(PrimitiveObjectData):
"""
PlaneObjectData defines the specific properties for a planar object in the simulation.
Attributes:
----------
size : tuple[float, float]
The size of the plane, defined by its width and length. Default is (1.0, 1.0).
direction : tuple[float, float, float]
The normal vector representing the direction the plane faces. Default is (0.0, 0.0, 1.0).
friction : float
The friction coefficient of the plane when in contact with other objects. Default is 1.0.
"""
size: tuple = field(default=(1.0, 1.0))
direction: tuple = field(default=(0.0, 0.0, 1.0))
friction: float = field(default=1.0)
@dataclass
class CylinderObjectData(PrimitiveObjectData):
"""
CylinderObjectData defines the specific properties for a cylindrical object in the simulation.
Attributes:
----------
radius : float
The radius of the cylinder. Default is 0.025.
length : float
The length of the cylinder. Default is 0.1.
friction : float
The friction coefficient of the cylinder when in contact with surfaces. Default is 1.0.
"""
radius: float = field(default=0.025)
length: float = field(default=0.1)
friction: float = field(default=1.0)
@dataclass
class BoxObjectData(PrimitiveObjectData):
"""
BoxObjectData defines the specific properties for a box-shaped object in the simulation.
Attributes:
----------
size : tuple[float, float, float]
The dimensions of the box in (width, height, depth). Default is (0.05, 0.05, 0.05).
friction : float
The friction coefficient of the box when in contact with surfaces. Default is 1.0.
"""
size: tuple = field(default=(0.05, 0.05, 0.05))
friction: float = field(default=1.0)
@dataclass
class ModelData(ObjectData):
"""
ModelData defines the specific properties for a model-based object in the simulation.
Attributes:
----------
texture : list[float]
A list of texture coordinates or texture properties applied to the mesh. Default is an empty list.
"""
texture: list[float] = field(default_factory=list)
@dataclass
class MeshData(ModelData):
mass: float = field(default_factory=float)
inertia: tuple[float, float, float, float, float, float] = field(
default_factory=tuple
)
cm: Pose = field(default_factory=Pose)
collision: str = field(default_factory=str)
visual: str = field(default_factory=str)
friction: float = field(default_factory=float)
density: float = field(default_factory=float)
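A hedged sketch of how these dataclasses compose: a box with position/color randomization plus a static plane (import path assumed).

from env_manager.models.configs import (  # hypothetical import path
    BoxObjectData,
    ObjectRandomizerData,
    PlaneObjectData,
)

box = BoxObjectData(
    name="target_box",
    type="box",
    position=(0.4, 0.0, 0.05),
    size=(0.06, 0.06, 0.06),
    randomize=ObjectRandomizerData(
        count=3,
        random_position=True,
        random_color=True,
        random_spawn_volume=(0.3, 0.3, 0.0),
    ),
)
table_top = PlaneObjectData(name="table_top", type="plane", static=True)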

View file

@ -1,103 +0,0 @@
from dataclasses import dataclass, field
from enum import Enum
@dataclass
class ToolData:
name: str = field(default_factory=str)
type: str = field(default_factory=str)
@dataclass
class GripperData(ToolData):
pass
@dataclass
class ParallelGripperData(GripperData):
pass
@dataclass
class MultifingerGripperData(GripperData):
pass
@dataclass
class VacuumGripperData(GripperData):
pass
class GripperEnum(Enum):
parallel = ParallelGripperData
multifinger = MultifingerGripperData
vacuum = VacuumGripperData
@dataclass
class RobotRandomizerData:
"""
RobotRandomizerData stores configuration parameters for randomizing robot properties during simulation.
Attributes:
----------
pose : bool
Flag indicating whether to randomize the robot's pose (position and orientation). Default is False.
spawn_volume : tuple[float, float, float]
The volume within which the robot can be spawned, defined by (x, y, z) dimensions. Default is (1.0, 1.0, 0.0).
joint_positions : bool
Flag indicating whether to randomize the robot's joint positions. Default is False.
joint_positions_std : float
The standard deviation for randomizing the robot's joint positions. Default is 0.1.
joint_positions_above_object_spawn : bool
Flag indicating whether the robot's joint positions should be randomized to place the robot above the object's spawn position. Default is False.
joint_positions_above_object_spawn_elevation : float
The elevation above the object's spawn position when placing the robot's joints. Default is 0.2.
joint_positions_above_object_spawn_xy_randomness : float
The randomness in the x and y coordinates when placing the robot's joints above the object's spawn position. Default is 0.2.
"""
pose: bool = field(default=False)
spawn_volume: tuple[float, float, float] = field(default=(1.0, 1.0, 0.0))
joint_positions: bool = field(default=False)
joint_positions_std: float = field(default=0.1)
joint_positions_above_object_spawn: bool = field(default=False)
joint_positions_above_object_spawn_elevation: float = field(default=0.2)
joint_positions_above_object_spawn_xy_randomness: float = field(default=0.2)
@dataclass
class RobotData:
"""
RobotData stores the base properties and configurations for a robot in the simulation.
Attributes:
----------
name : str
The name of the robot. Default is an empty string.
urdf_string : str
Optional parameter that can store a URDF. This parameter is overridden by the node parameters if set in the node configuration.
spawn_position : tuple[float, float, float]
The position where the robot will be spawned in (x, y, z) coordinates. Default is (0.0, 0.0, 0.0).
spawn_quat_xyzw : tuple[float, float, float, float]
The orientation of the robot in quaternion format (x, y, z, w) at spawn. Default is (0.0, 0.0, 0.0, 1.0).
joint_positions : list[float]
A list of the robot's joint positions. Default is an empty list.
with_gripper : bool
Flag indicating whether the robot is equipped with a gripper. Default is False.
gripper_joint_positions : list[float] | float
The joint positions for the gripper. Can be a list of floats or a single float. Default is an empty list.
randomizer : RobotRandomizerData
The randomization settings for the robot, allowing various properties to be randomized in simulation. Default is an instance of RobotRandomizerData.
"""
name: str = field(default_factory=str)
urdf_string: str = field(default_factory=str)
spawn_position: tuple[float, float, float] = field(default=(0.0, 0.0, 0.0))
spawn_quat_xyzw: tuple[float, float, float, float] = field(
default=(0.0, 0.0, 0.0, 1.0)
)
joint_positions: list[float] = field(default_factory=list)
with_gripper: bool = field(default=False)
gripper_joint_positions: list[float] | float = field(default_factory=list)
randomizer: RobotRandomizerData = field(default_factory=RobotRandomizerData)
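A hedged sketch of a robot configuration with joint-position randomization enabled (import path assumed; the name and joint values are illustrative).

from env_manager.models.configs import RobotData, RobotRandomizerData  # hypothetical import path

robot = RobotData(
    name="rbs_arm",  # illustrative
    spawn_position=(0.0, 0.0, 0.0),
    joint_positions=[0.0, -1.57, 1.57, 0.0, 1.57, 0.0],
    with_gripper=True,
    gripper_joint_positions=0.0,
    randomizer=RobotRandomizerData(
        joint_positions=True,
        joint_positions_std=0.05,
    ),
)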

View file

@ -1,68 +0,0 @@
from dataclasses import dataclass, field
from .camera import CameraData
from .light import LightData
from .objects import ObjectData
from .robot import RobotData
from .terrain import TerrainData
@dataclass
class PluginsData:
"""
PluginsData stores the configuration for various plugins used in the simulation environment.
Attributes:
----------
scene_broadcaster : bool
Flag indicating whether the scene broadcaster plugin is enabled. Default is False.
user_commands : bool
Flag indicating whether the user commands plugin is enabled. Default is False.
fts_broadcaster : bool
Flag indicating whether the force torque sensor (FTS) broadcaster plugin is enabled. Default is False.
sensors_render_engine : str
The rendering engine used for sensors. Default is "ogre2".
"""
scene_broadcaster: bool = field(default_factory=bool)
user_commands: bool = field(default_factory=bool)
fts_broadcaster: bool = field(default_factory=bool)
sensors_render_engine: str = field(default="ogre2")
@dataclass
class SceneData:
"""
SceneData contains the configuration and settings for the simulation scene.
Attributes:
----------
physics_rollouts_num : int
The number of physics rollouts to perform in the simulation. Default is 0.
gravity : tuple[float, float, float]
The gravitational acceleration vector applied in the scene, represented as (x, y, z). Default is (0.0, 0.0, -9.80665).
gravity_std : tuple[float, float, float]
The standard deviation for the gravitational acceleration, represented as (x, y, z). Default is (0.0, 0.0, 0.0232).
robot : RobotData
The configuration data for the robot present in the scene. Default is an instance of RobotData.
terrain : TerrainData
The configuration data for the terrain in the scene. Default is an instance of TerrainData.
light : LightData
The configuration data for the lighting in the scene. Default is an instance of LightData.
objects : list[ObjectData]
A list of objects present in the scene, represented by their ObjectData configurations. Default is an empty list.
camera : list[CameraData]
A list of cameras in the scene, represented by their CameraData configurations. Default is an empty list.
plugins : PluginsData
The configuration data for various plugins utilized in the simulation environment. Default is an instance of PluginsData.
"""
physics_rollouts_num: int = field(default=0)
gravity: tuple[float, float, float] = field(default=(0.0, 0.0, -9.80665))
gravity_std: tuple[float, float, float] = field(default=(0.0, 0.0, 0.0232))
robot: RobotData = field(default_factory=RobotData)
terrain: TerrainData = field(default_factory=TerrainData)
light: LightData = field(default_factory=LightData)
objects: list[ObjectData] = field(default_factory=list)
camera: list[CameraData] = field(default_factory=list)
plugins: PluginsData = field(default_factory=PluginsData)
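SceneData is the aggregate the environment manager consumes; a hedged sketch of assembling one from the pieces defined above (import path assumed).

from env_manager.models.configs import (  # hypothetical import path
    CameraData,
    LightData,
    PluginsData,
    RobotData,
    SceneData,
    TerrainData,
)

scene = SceneData(
    robot=RobotData(name="rbs_arm"),
    terrain=TerrainData(size=(2.0, 2.0)),
    light=LightData(type="sun"),
    camera=[CameraData(name="overview", publish_color=True)],
    plugins=PluginsData(scene_broadcaster=True, sensors_render_engine="ogre2"),
)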

View file

@ -1,33 +0,0 @@
from dataclasses import dataclass, field
@dataclass
class TerrainData:
"""
TerrainData stores the configuration for the terrain in the simulation environment.
Attributes:
----------
name : str
The name of the terrain. Default is "ground".
type : str
The type of terrain (e.g., "flat", "hilly", "uneven"). Default is "flat".
spawn_position : tuple[float, float, float]
The position where the terrain will be spawned in the simulation, represented as (x, y, z). Default is (0, 0, 0).
spawn_quat_xyzw : tuple[float, float, float, float]
The orientation of the terrain at spawn, represented as a quaternion (x, y, z, w). Default is (0, 0, 0, 1).
size : tuple[float, float]
The size of the terrain, represented as (width, length). Default is (1.5, 1.5).
model_rollouts_num : int
The number of rollouts for randomizing terrain models. Default is 1.
"""
name: str = field(default="ground")
type: str = field(default="flat")
spawn_position: tuple[float, float, float] = field(default=(0, 0, 0))
spawn_quat_xyzw: tuple[float, float, float, float] = field(default=(0, 0, 0, 1))
size: tuple[float, float] = field(default=(1.5, 1.5))
ambient: tuple[float, float, float, float] = field(default=(0.8, 0.8, 0.8, 1.0))
specular: tuple[float, float, float, float] = field(default=(0.8, 0.8, 0.8, 1.0))
diffuse: tuple[float, float, float, float] = field(default=(0.8, 0.8, 0.8, 1.0))
model_rollouts_num: int = field(default=1)
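A brief sketch of overriding the terrain defaults (import path assumed; the quaternion is in xyzw order, as documented above).

from env_manager.models.configs import TerrainData  # hypothetical import path

terrain = TerrainData(
    name="workbench",
    size=(2.0, 1.0),
    spawn_quat_xyzw=(0.0, 0.0, 0.0, 1.0),  # identity orientation
    ambient=(0.6, 0.6, 0.6, 1.0),
)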

View file

@ -1,34 +0,0 @@
from enum import Enum
from gym_gz.scenario.model_wrapper import ModelWrapper
from .random_sun import RandomSun
from .sun import Sun
# Enum for light types
class LightType(Enum):
SUN = "sun"
RANDOM_SUN = "random_sun"
LIGHT_MODEL_CLASSES = {
LightType.SUN: Sun,
LightType.RANDOM_SUN: RandomSun,
}
def get_light_model_class(light_type: str) -> type[ModelWrapper]:
try:
light_enum = LightType(light_type)
return LIGHT_MODEL_CLASSES[light_enum]
except (ValueError, KeyError):  # LightType(...) raises ValueError for unknown strings
raise ValueError(f"Unsupported light type: {light_type}")
def is_light_type_randomizable(light_type: str) -> bool:
try:
light_enum = LightType(light_type)
return light_enum == LightType.RANDOM_SUN
except ValueError:
return False
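The factory above maps a configuration string to a model-wrapper class; a hedged usage sketch (module path assumed).

from env_manager.models.lights import (  # hypothetical import path
    get_light_model_class,
    is_light_type_randomizable,
)

print(get_light_model_class("random_sun").__name__)  # RandomSun
print(is_light_type_randomizable("random_sun"))      # True
print(is_light_type_randomizable("sun"))             # False
# get_light_model_class("torch") raises ValueError: Unsupported light type: torch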

View file

@ -1,240 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from numpy.random import RandomState
from scenario import core as scenario
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class RandomSun(model_wrapper.ModelWrapper):
"""
RandomSun is a class that creates a randomly positioned directional light (like the Sun) in a simulation world.
Attributes:
----------
world : scenario_gazebo.World
The simulation world where the model is inserted.
name : str
The name of the sun model. Default is "sun".
minmax_elevation : tuple[float, float]
Minimum and maximum values for the random elevation angle of the sun. Default is (-0.15, -0.65).
distance : float
Distance from the origin to the sun in the simulation. Default is 800.0.
visual : bool
Flag indicating whether the sun should have a visual sphere in the simulation. Default is True.
radius : float
Radius of the visual sphere representing the sun. Default is 20.0.
color_minmax_r : tuple[float, float]
Range for the red component of the sun's light color. Default is (1.0, 1.0).
color_minmax_g : tuple[float, float]
Range for the green component of the sun's light color. Default is (1.0, 1.0).
color_minmax_b : tuple[float, float]
Range for the blue component of the sun's light color. Default is (1.0, 1.0).
specular : float
The specular factor for the sun's light. Default is 1.0.
attenuation_minmax_range : tuple[float, float]
Range for the light attenuation (falloff) distance. Default is (750.0, 15000.0).
attenuation_minmax_constant : tuple[float, float]
Range for the constant component of the light attenuation. Default is (0.5, 1.0).
attenuation_minmax_linear : tuple[float, float]
Range for the linear component of the light attenuation. Default is (0.001, 0.1).
attenuation_minmax_quadratic : tuple[float, float]
Range for the quadratic component of the light attenuation. Default is (0.0001, 0.01).
np_random : RandomState | None
Random state for generating random values. Default is None, in which case a new random generator is created.
Raises:
-------
RuntimeError:
Raised if the sun model fails to be inserted into the world.
Methods:
--------
get_sdf():
Generates the SDF string for the sun model.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "sun",
minmax_elevation: tuple[float, float] = (-0.15, -0.65),
distance: float = 800.0,
visual: bool = True,
radius: float = 20.0,
color_minmax_r: tuple[float, float] = (1.0, 1.0),
color_minmax_g: tuple[float, float] = (1.0, 1.0),
color_minmax_b: tuple[float, float] = (1.0, 1.0),
specular: float = 1.0,
attenuation_minmax_range: tuple[float, float] = (750.0, 15000.0),
attenuation_minmax_constant: tuple[float, float] = (0.5, 1.0),
attenuation_minmax_linear: tuple[float, float] = (0.001, 0.1),
attenuation_minmax_quadratic: tuple[float, float] = (0.0001, 0.01),
np_random: RandomState | None = None,
**kwargs,
):
if np_random is None:
np_random = np.random.default_rng()
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Get random yaw direction
direction = np_random.uniform(-1.0, 1.0, (2,))
# Normalize yaw direction
direction = direction / np.linalg.norm(direction)
# Get random elevation
direction = np.append(
direction,
np_random.uniform(minmax_elevation[0], minmax_elevation[1]),
)
# Normalize again
direction = direction / np.linalg.norm(direction)
# Initial pose
initial_pose = scenario_core.Pose(
(
-direction[0] * distance,
-direction[1] * distance,
-direction[2] * distance,
),
(1, 0, 0, 0),
)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=model_name,
direction=tuple(direction),
visual=visual,
radius=radius,
color_minmax_r=color_minmax_r,
color_minmax_g=color_minmax_g,
color_minmax_b=color_minmax_b,
attenuation_minmax_range=attenuation_minmax_range,
attenuation_minmax_constant=attenuation_minmax_constant,
attenuation_minmax_linear=attenuation_minmax_linear,
attenuation_minmax_quadratic=attenuation_minmax_quadratic,
specular=specular,
np_random=np_random,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
direction: tuple[float, float, float],
visual: bool,
radius: float,
color_minmax_r: tuple[float, float],
color_minmax_g: tuple[float, float],
color_minmax_b: tuple[float, float],
attenuation_minmax_range: tuple[float, float],
attenuation_minmax_constant: tuple[float, float],
attenuation_minmax_linear: tuple[float, float],
attenuation_minmax_quadratic: tuple[float, float],
specular: float,
np_random: RandomState,
) -> str:
"""
Generates the SDF string for the sun model.
Args:
-----
model_name : str
The name of the model.
direction : Tuple[float, float, float]
The direction of the sun's light.
visual : bool
If True, a visual representation of the sun will be created.
radius : float
The radius of the visual representation.
color_minmax_r : Tuple[float, float]
Range for the red component of the light color.
color_minmax_g : Tuple[float, float]
Range for the green component of the light color.
color_minmax_b : Tuple[float, float]
Range for the blue component of the light color.
attenuation_minmax_range : Tuple[float, float]
Range for light attenuation distance.
attenuation_minmax_constant : Tuple[float, float]
Range for the constant attenuation factor.
attenuation_minmax_linear : Tuple[float, float]
Range for the linear attenuation factor.
attenuation_minmax_quadratic : Tuple[float, float]
Range for the quadratic attenuation factor.
specular : float
The specular reflection factor for the light.
np_random : RandomState
The random number generator used to sample random values for the parameters.
Returns:
--------
str:
The SDF string for the sun model.
"""
# Sample random values for parameters
color_r = np_random.uniform(color_minmax_r[0], color_minmax_r[1])
color_g = np_random.uniform(color_minmax_g[0], color_minmax_g[1])
color_b = np_random.uniform(color_minmax_b[0], color_minmax_b[1])
attenuation_range = np_random.uniform(
attenuation_minmax_range[0], attenuation_minmax_range[1]
)
attenuation_constant = np_random.uniform(
attenuation_minmax_constant[0], attenuation_minmax_constant[1]
)
attenuation_linear = np_random.uniform(
attenuation_minmax_linear[0], attenuation_minmax_linear[1]
)
attenuation_quadratic = np_random.uniform(
attenuation_minmax_quadratic[0], attenuation_minmax_quadratic[1]
)
return f'''<sdf version="1.9">
<model name="{model_name}">
<static>true</static>
<link name="{model_name}_link">
<light type="directional" name="{model_name}_light">
<direction>{direction[0]} {direction[1]} {direction[2]}</direction>
<attenuation>
<range>{attenuation_range}</range>
<constant>{attenuation_constant}</constant>
<linear>{attenuation_linear}</linear>
<quadratic>{attenuation_quadratic}</quadratic>
</attenuation>
<diffuse>{color_r} {color_g} {color_b} 1</diffuse>
<specular>{specular*color_r} {specular*color_g} {specular*color_b} 1</specular>
<cast_shadows>true</cast_shadows>
</light>
{
f"""
<visual name="{model_name}_visual">
<geometry>
<sphere>
<radius>{radius}</radius>
</sphere>
</geometry>
<material>
<emissive>{color_r} {color_g} {color_b} 1</emissive>
</material>
<cast_shadows>false</cast_shadows>
</visual>
""" if visual else ""
}
</link>
</model>
</sdf>'''
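A hedged, Gazebo-free sketch of the direction sampling done in the constructor above: a random unit yaw in the XY plane, a random elevation appended, then renormalization; the light is placed opposite its direction at the given distance.

import numpy as np

rng = np.random.default_rng(seed=0)
minmax_elevation = (-0.15, -0.65)
distance = 800.0

direction = rng.uniform(-1.0, 1.0, (2,))
direction = direction / np.linalg.norm(direction)  # random yaw, unit length in XY
direction = np.append(direction, rng.uniform(minmax_elevation[0], minmax_elevation[1]))
direction = direction / np.linalg.norm(direction)  # renormalize with the elevation included

position = tuple(-direction * distance)  # spawn point of the sun model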

View file

@ -1,191 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Sun(model_wrapper.ModelWrapper):
"""
Sun is a class that represents a directional light source in the simulation, similar to the Sun.
It can have both visual and light properties, with customizable parameters such as color, direction, and attenuation.
Attributes:
----------
world : scenario_gazebo.World
The Gazebo world where the sun model will be inserted.
name : str
The name of the sun model. Default is "sun".
direction : tuple[float, float, float]
The direction of the sunlight, normalized. Default is (0.5, -0.25, -0.75).
color : tuple[float, float, float, float]
The RGBA color values for the light's diffuse color. Default is (1.0, 1.0, 1.0, 1.0).
distance : float
The distance from the origin to the sun. Default is 800.0.
visual : bool
If True, a visual representation of the sun will be added. Default is True.
radius : float
The radius of the visual representation of the sun. Default is 20.0.
specular : float
The intensity of the specular reflection. Default is 1.0.
attenuation_range : float
The maximum range for the light attenuation. Default is 10000.0.
attenuation_constant : float
The constant attenuation factor. Default is 0.9.
attenuation_linear : float
The linear attenuation factor. Default is 0.01.
attenuation_quadratic : float
The quadratic attenuation factor. Default is 0.001.
Raises:
-------
RuntimeError:
If the sun model fails to be inserted into the Gazebo world.
Methods:
--------
get_sdf() -> str:
Generates the SDF string used to describe the sun model in the Gazebo simulation.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "sun",
direction: tuple[float, float, float] = (0.5, -0.25, -0.75),
color: tuple[float, float, float, float] = (1.0, 1.0, 1.0, 1.0),
distance: float = 800.0,
visual: bool = True,
radius: float = 20.0,
specular: float = 1.0,
attenuation_range: float = 10000.0,
attenuation_constant: float = 0.9,
attenuation_linear: float = 0.01,
attenuation_quadratic: float = 0.001,
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Normalize direction
np_direction = np.array(direction)
np_direction = np_direction / np.linalg.norm(np_direction)
# Initial pose
initial_pose = scenario_core.Pose(
(
-np_direction[0] * distance,
-np_direction[1] * distance,
-np_direction[2] * distance,
),
(1, 0, 0, 0),
)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=model_name,
direction=tuple(np_direction),
color=color,
visual=visual,
radius=radius,
specular=specular,
attenuation_range=attenuation_range,
attenuation_constant=attenuation_constant,
attenuation_linear=attenuation_linear,
attenuation_quadratic=attenuation_quadratic,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
direction: tuple[float, float, float],
color: tuple[float, float, float, float],
visual: bool,
radius: float,
specular: float,
attenuation_range: float,
attenuation_constant: float,
attenuation_linear: float,
attenuation_quadratic: float,
) -> str:
"""
Generates the SDF string for the sun model.
Args:
-----
model_name : str
The name of the sun model.
direction : tuple[float, float, float]
The direction vector for the sunlight.
color : tuple[float, float, float, float]
The RGBA color values for the sunlight.
visual : bool
Whether to include a visual representation of the sun (a sphere).
radius : float
The radius of the visual sphere.
specular : float
The specular reflection intensity.
attenuation_range : float
The range of the light attenuation.
attenuation_constant : float
The constant factor for the light attenuation.
attenuation_linear : float
The linear factor for the light attenuation.
attenuation_quadratic : float
The quadratic factor for the light attenuation.
Returns:
--------
str:
The SDF string for the sun model.
"""
return f'''<sdf version="1.9">
<model name="{model_name}">
<static>true</static>
<link name="{model_name}_link">
<light type="directional" name="{model_name}_light">
<direction>{direction[0]} {direction[1]} {direction[2]}</direction>
<attenuation>
<range>{attenuation_range}</range>
<constant>{attenuation_constant}</constant>
<linear>{attenuation_linear}</linear>
<quadratic>{attenuation_quadratic}</quadratic>
</attenuation>
<diffuse>{color[0]} {color[1]} {color[2]} 1</diffuse>
<specular>{specular*color[0]} {specular*color[1]} {specular*color[2]} 1</specular>
<cast_shadows>true</cast_shadows>
</light>
{
f"""
<visual name="{model_name}_visual">
<geometry>
<sphere>
<radius>{radius}</radius>
</sphere>
</geometry>
<material>
<emissive>{color[0]} {color[1]} {color[2]} 1</emissive>
</material>
<cast_shadows>false</cast_shadows>
</visual>
""" if visual else ""
}
</link>
</model>
</sdf>'''
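Because get_sdf is a classmethod that only formats a string, it can be exercised without inserting anything into a world; a hedged sketch (assumes the gym_gz/scenario imports at the top of the file resolve).

sdf = Sun.get_sdf(
    model_name="sun_0",
    direction=(0.5, -0.25, -0.75),
    color=(1.0, 1.0, 1.0, 1.0),
    visual=False,
    radius=20.0,
    specular=1.0,
    attenuation_range=10000.0,
    attenuation_constant=0.9,
    attenuation_linear=0.01,
    attenuation_quadratic=0.001,
)
print(sdf.splitlines()[0])  # <sdf version="1.9">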

View file

@ -1,53 +0,0 @@
from enum import Enum
from gym_gz.scenario.model_wrapper import ModelWrapper
from .model import Model
from .mesh import Mesh
from .primitives import Box, Cylinder, Plane, Sphere
from .random_object import RandomObject
from .random_primitive import RandomPrimitive
class ObjectType(Enum):
BOX = "box"
SPHERE = "sphere"
CYLINDER = "cylinder"
PLANE = "plane"
RANDOM_PRIMITIVE = "random_primitive"
RANDOM_MESH = "random_mesh"
MODEL = "model"
MESH = "mesh"
OBJECT_MODEL_CLASSES = {
ObjectType.BOX: Box,
ObjectType.SPHERE: Sphere,
ObjectType.CYLINDER: Cylinder,
ObjectType.PLANE: Plane,
ObjectType.RANDOM_PRIMITIVE: RandomPrimitive,
ObjectType.RANDOM_MESH: RandomObject,
ObjectType.MODEL: Model,
ObjectType.MESH: Mesh
}
RANDOMIZABLE_TYPES = {
ObjectType.RANDOM_PRIMITIVE,
ObjectType.RANDOM_MESH,
}
def get_object_model_class(object_type: str) -> type[ModelWrapper]:
try:
object_enum = ObjectType(object_type)
return OBJECT_MODEL_CLASSES[object_enum]
except (ValueError, KeyError):  # ObjectType(...) raises ValueError for unknown strings
raise ValueError(f"Unsupported object type: {object_type}")
def is_object_type_randomizable(object_type: str) -> bool:
try:
object_enum = ObjectType(object_type)
return object_enum in RANDOMIZABLE_TYPES
except ValueError:
return False
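Same factory pattern as the lights module; a brief hedged usage sketch (module path assumed).

from env_manager.models.objects import (  # hypothetical import path
    get_object_model_class,
    is_object_type_randomizable,
)

print(get_object_model_class("box").__name__)      # Box
print(is_object_type_randomizable("random_mesh"))  # True
print(is_object_type_randomizable("box"))          # False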

View file

@ -1,183 +0,0 @@
import numpy as np
import trimesh
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from trimesh import Trimesh
from env_manager.utils.types import Point
InertiaTensor = tuple[float, float, float, float, float, float]
class Mesh(model_wrapper.ModelWrapper):
""" """
def __init__(
self,
world: scenario_gazebo.World,
name: str = "object",
type: str = "mesh",
relative_to: str = "world",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
color: tuple[float, float, float, float] | None = None,
static: bool = False,
texture: list[float] = [],
mass: float = 0.0,
inertia: InertiaTensor | None = None,
cm: Point | None = None,
collision: str = "",
visual: str = "",
friction: float = 0.0,
density: float = 0.0,
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Derive the inertial parameters from the collision mesh when they are not given explicitly
if not inertia and collision and density:
mass, cm, inertia = self.calculate_inertia(collision, density)
elif not inertia:
raise RuntimeError(
f"Please provide a collision mesh filepath and density for model {name} to calculate its inertia"
)
if not color:
color = tuple(np.random.uniform(0.0, 1.0, (3,)))
color = (color[0], color[1], color[2], 1.0)
# Create SDF string for the model
sdf = self.get_sdf(
name=name,
static=static,
collision=collision,
visual=visual,
mass=mass,
inertia=inertia,
friction=friction,
color=color,
center_of_mass=cm,
gui_only=True,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError(f"Failed to insert {model_name}")
# Get the model
model = world.get_model(model_name)
# Initialize base class
super().__init__(model=model)
def calculate_inertia(self, file_path, density):
mesh = trimesh.load(file_path)
if not isinstance(mesh, Trimesh):
raise RuntimeError("Please provide correct stl mesh filepath")
volume = mesh.volume
mass: float = volume * density
center_of_mass: Point = tuple(mesh.center_mass)
# Full 3x3 inertia tensor about the mesh's center of mass
inertia = mesh.moment_inertia
inertia_tensor: InertiaTensor = (
inertia[0, 0],
inertia[0, 1],
inertia[0, 2],
inertia[1, 1],
inertia[1, 2],
inertia[2, 2],
)
return mass, center_of_mass, inertia_tensor
@classmethod
def get_sdf(
cls,
name: str,
static: bool,
collision: str,
visual: str,
mass: float,
inertia: InertiaTensor,
friction: float,
color: tuple[float, float, float, float],
center_of_mass: Point,
gui_only: bool,
) -> str:
"""
Generates the SDF string for the mesh model.
Args:
The constructor parameters above (name, static, collision, visual, mass, inertia, friction, color, center_of_mass, gui_only).
Returns:
The SDF string that defines the mesh model in Gazebo.
"""
return f'''<sdf version="1.7">
<model name="{name}">
<static>{"true" if static else "false"}</static>
<link name="{name}_link">
{
f"""
<collision name="{name}_collision">
<geometry>
<mesh>
<uri>{collision}</uri>
</mesh>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
""" if collision else ""
}
{
f"""
<visual name="{name}_visual">
<geometry>
<mesh>
<uri>{visual}</uri>
</mesh>
</geometry>
<material>
<ambient>{color[0]} {color[1]} {color[2]} {color[3]}</ambient>
<diffuse>{color[0]} {color[1]} {color[2]} {color[3]}</diffuse>
<specular>{color[0]} {color[1]} {color[2]} {color[3]}</specular>
</material>
<transparency>{1.0 - color[3]}</transparency>
{'<visibility_flags>1</visibility_flags> <cast_shadows>false</cast_shadows>' if gui_only else ''}
</visual>
""" if visual else ""
}
<inertial>
<mass>{mass}</mass>
<pose>{center_of_mass[0]} {center_of_mass[1]} {center_of_mass[2]} 0 0 0</pose>
<inertia>
<ixx>{inertia[0]}</ixx>
<ixy>{inertia[1]}</ixy>
<ixz>{inertia[2]}</ixz>
<iyy>{inertia[3]}</iyy>
<iyz>{inertia[4]}</iyz>
<izz>{inertia[5]}</izz>
</inertia>
</inertial>
</link>
</model>
</sdf>'''
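calculate_inertia above delegates the mass properties to trimesh; a hedged standalone sketch of the same computation on a primitive box used as a stand-in for a collision mesh (no Gazebo required). Note that trimesh evaluates moment_inertia at its default density of 1.0, so the tensor may need scaling by the chosen density to stay consistent with the mass.

import trimesh

mesh = trimesh.creation.box(extents=(0.05, 0.05, 0.05))  # stand-in for a collision mesh
density = 1000.0  # kg/m^3, illustrative

mass = mesh.volume * density
center_of_mass = tuple(mesh.center_mass)
I = mesh.moment_inertia  # 3x3 tensor about the center of mass
inertia_tensor = (I[0, 0], I[0, 1], I[0, 2], I[1, 1], I[1, 2], I[2, 2])  # ixx, ixy, ixz, iyy, iyz, izz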

View file

@ -1,75 +0,0 @@
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from rbs_assets_library import get_model_file
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Model(model_wrapper.ModelWrapper):
"""
Represents a 3D mesh model in the Gazebo simulation environment.
This class is responsible for loading and inserting a mesh model
into the Gazebo world with specified attributes.
Args:
world (scenario_gazebo.World): The Gazebo world where the mesh model will be inserted.
name (str, optional): The name of the mesh model. Defaults to "object".
position (tuple[float, float, float], optional): The position of the mesh in the world. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float], optional): The orientation of the mesh in quaternion format. Defaults to (1, 0, 0, 0).
gui_only (bool, optional): If True, the visual representation of the mesh will only appear in the GUI. Defaults to False.
**kwargs: Additional keyword arguments.
Raises:
RuntimeError: If the mesh model fails to be inserted into the Gazebo world.
Methods:
get_sdf(model_name: str) -> str:
Generates the SDF string used to describe the mesh model in the Gazebo simulation.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "object",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
gui_only: bool = False,
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=name,
)
# Insert the model
ok_model = world.to_gazebo().insert_model(sdf, initial_pose, model_name)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
super().__init__(model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
) -> str:
"""
Generates the SDF string for the specified mesh model.
Args:
model_name (str): The name of the mesh model to generate the SDF for.
Returns:
str: The SDF string that defines the mesh model in Gazebo.
"""
return get_model_file(model_name)
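A minimal hedged sketch of inserting an asset-library model: it assumes a live scenario_gazebo.World handle and a model name that rbs_assets_library actually provides (both illustrative).

# world: scenario_gazebo.World obtained from the running simulation (assumed)
marker = Model(
    world,
    name="hole_plate",  # hypothetical asset name in rbs_assets_library
    position=(0.3, 0.1, 0.0),
    orientation=(1.0, 0.0, 0.0, 0.0),
)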

View file

@ -1,4 +0,0 @@
from .box import Box
from .cylinder import Cylinder
from .plane import Plane
from .sphere import Sphere

View file

@ -1,176 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Box(model_wrapper.ModelWrapper):
"""
The Box class represents a 3D box model in the Gazebo simulation environment.
It includes physical and visual properties such as size, mass, color, and collision properties.
Attributes:
world (scenario_gazebo.World): The Gazebo world where the box model will be inserted.
name (str): The name of the box model. Default is "box".
position (tuple[float, float, float]): The position of the box in the world. Default is (0, 0, 0).
orientation (tuple[float, float, float, float]): The orientation of the box in quaternion format. Default is (1, 0, 0, 0).
size (tuple[float, float, float]): The size of the box (width, length, height). Default is (0.05, 0.05, 0.05).
mass (float): The mass of the box in kilograms. Default is 0.1.
static (bool): If True, the box will be immovable and static. Default is False.
collision (bool): If True, the box will have collision properties. Default is True.
friction (float): The friction coefficient for the box's surface. Default is 1.0.
visual (bool): If True, the box will have a visual representation. Default is True.
gui_only (bool): If True, the visual representation of the box will only appear in the GUI, not in the simulation physics. Default is False.
color (tuple[float, float, float, float]): The RGBA color of the box. Default is (0.8, 0.8, 0.8, 1.0).
Raises:
RuntimeError:
If the box model fails to be inserted into the Gazebo world.
Methods:
--------
get_sdf() -> str:
Generates the SDF string used to describe the box model in the Gazebo simulation.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "box",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
size: tuple[float, float, float] = (0.05, 0.05, 0.05),
mass: float = 0.1,
static: bool = False,
collision: bool = True,
friction: float = 1.0,
visual: bool = True,
gui_only: bool = False,
color: tuple[float, float, float, float] | None = None,
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
if not color:
color = tuple(np.random.uniform(0.0, 1.0, (3,)))
color = (color[0], color[1], color[2], 1.0)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=model_name,
size=size,
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError(f"Failed to insert {model_name}")
# Get the model
model = world.get_model(model_name)
# Initialize base class
super().__init__(model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
size: tuple[float, float, float],
mass: float,
static: bool,
collision: bool,
friction: float,
visual: bool,
gui_only: bool,
color: tuple[float, float, float, float],
) -> str:
"""
Generates the SDF string for the box model.
Args:
model_name (str): The name of the box model.
size (tuple[float, float, float]): The dimensions of the box (width, length, height).
mass (float): The mass of the box.
static (bool): If True, the box will be static and immovable.
collision (bool): If True, the box will have collision properties.
friction (float): The friction coefficient for the box.
visual (bool): If True, the box will have a visual representation.
gui_only (bool): If True, the box's visual representation will only appear in the GUI and will not affect physics.
color (tuple[float, float, float, float]): The RGBA color of the box.
Returns:
The SDF string that defines the box model in Gazebo.
"""
return f'''<sdf version="1.7">
<model name="{model_name}">
<static>{"true" if static else "false"}</static>
<link name="{model_name}_link">
{
f"""
<collision name="{model_name}_collision">
<geometry>
<box>
<size>{size[0]} {size[1]} {size[2]}</size>
</box>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
""" if collision else ""
}
{
f"""
<visual name="{model_name}_visual">
<geometry>
<box>
<size>{size[0]} {size[1]} {size[2]}</size>
</box>
</geometry>
<material>
<ambient>{color[0]} {color[1]} {color[2]} {color[3]}</ambient>
<diffuse>{color[0]} {color[1]} {color[2]} {color[3]}</diffuse>
<specular>{color[0]} {color[1]} {color[2]} {color[3]}</specular>
</material>
<transparency>{1.0 - color[3]}</transparency>
{'<visibility_flags>1</visibility_flags> <cast_shadows>false</cast_shadows>' if gui_only else ''}
</visual>
""" if visual else ""
}
<inertial>
<mass>{mass}</mass>
<inertia>
<ixx>{(size[1]**2 + size[2]**2) * mass / 12}</ixx>
<iyy>{(size[0]**2 + size[2]**2) * mass / 12}</iyy>
<izz>{(size[0]**2 + size[1]**2) * mass / 12}</izz>
<ixy>0.0</ixy>
<ixz>0.0</ixz>
<iyz>0.0</iyz>
</inertia>
</inertial>
</link>
</model>
</sdf>'''
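Because `Box.get_sdf` is a plain classmethod, the generated SDF can be inspected without a running simulator. A minimal sketch, assuming the `Box` class above is in scope:
# Build the SDF string offline and sanity-check the cuboid inertia term.
size = (0.05, 0.05, 0.05)
mass = 0.2
sdf = Box.get_sdf(
    model_name="demo_box",
    size=size,
    mass=mass,
    static=False,
    collision=True,
    friction=1.0,
    visual=True,
    gui_only=False,
    color=(0.9, 0.1, 0.1, 1.0),
)
# ixx of a solid cuboid is m * (ly**2 + lz**2) / 12
assert f"<ixx>{(size[1]**2 + size[2]**2) * mass / 12}</ixx>" in sdf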

View file

@ -1,173 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Cylinder(model_wrapper.ModelWrapper):
"""
The Cylinder class represents a cylindrical model in the Gazebo simulation environment.
Attributes:
world (scenario_gazebo.World): The Gazebo world where the cylinder model will be inserted.
name (str): The name of the cylinder model. Default is "cylinder".
position (tuple[float, float, float]): The position of the cylinder in the world. Default is (0, 0, 0).
orientation (tuple[float, float, float, float]): The orientation of the cylinder in quaternion format. Default is (1, 0, 0, 0).
radius (float): The radius of the cylinder. Default is 0.025.
length (float): The length/height of the cylinder. Default is 0.1.
mass (float): The mass of the cylinder in kilograms. Default is 0.1.
static (bool): If True, the cylinder will be immovable. Default is False.
collision (bool): If True, the cylinder will have collision properties. Default is True.
friction (float): The friction coefficient for the cylinder's surface. Default is 1.0.
visual (bool): If True, the cylinder will have a visual representation. Default is True.
gui_only (bool): If True, the visual representation of the cylinder will only appear in the GUI, not in the simulation physics. Default is False.
color (tuple[float, float, float, float]): The RGBA color of the cylinder. Default is (0.8, 0.8, 0.8, 1.0).
Raises:
RuntimeError: If the cylinder model fails to be inserted into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "cylinder",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
radius: float = 0.025,
length: float = 0.1,
mass: float = 0.1,
static: bool = False,
collision: bool = True,
friction: float = 1.0,
visual: bool = True,
gui_only: bool = False,
color: tuple[float, float, float, float] | None = None,
**kwargs,
):
model_name = get_unique_model_name(world, name)
initial_pose = scenario_core.Pose(position, orientation)
if not color:
color = tuple(np.random.uniform(0.0, 1.0, (3,)))
color = (color[0], color[1], color[2], 1.0)
sdf = self.get_sdf(
model_name=model_name,
radius=radius,
length=length,
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError(f"Failed to insert {model_name}")
model = world.get_model(model_name)
super().__init__(model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
radius: float,
length: float,
mass: float,
static: bool,
collision: bool,
friction: float,
visual: bool,
gui_only: bool,
color: tuple[float, float, float, float],
) -> str:
"""
Generates the SDF string for the cylinder model.
Args:
model_name (str): The name of the model.
radius (float): The radius of the cylinder.
length (float): The length or height of the cylinder.
mass (float): The mass of the cylinder in kilograms.
static (bool): If True, the cylinder will remain immovable in the simulation.
collision (bool): If True, adds collision properties to the cylinder.
friction (float): The friction coefficient for the cylinder's surface.
visual (bool): If True, a visual representation of the cylinder will be created.
gui_only (bool): If True, the visual representation will only appear in the GUI, without impacting the simulation's physics.
color (tuple[float, float, float, float]): The RGBA color of the cylinder, where each value is between 0 and 1.
Returns:
str: The SDF string representing the cylinder model
"""
inertia_xx_yy = (3 * radius**2 + length**2) * mass / 12
return f'''<sdf version="1.7">
<model name="{model_name}">
<static>{"true" if static else "false"}</static>
<link name="{model_name}_link">
{
f"""
<collision name="{model_name}_collision">
<geometry>
<cylinder>
<radius>{radius}</radius>
<length>{length}</length>
</cylinder>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
""" if collision else ""
}
{
f"""
<visual name="{model_name}_visual">
<geometry>
<cylinder>
<radius>{radius}</radius>
<length>{length}</length>
</cylinder>
</geometry>
<material>
<ambient>{color[0]} {color[1]} {color[2]} {color[3]}</ambient>
<diffuse>{color[0]} {color[1]} {color[2]} {color[3]}</diffuse>
<specular>{color[0]} {color[1]} {color[2]} {color[3]}</specular>
</material>
<transparency>{1.0-color[3]}</transparency>
{'<visibility_flags>1</visibility_flags> <cast_shadows>false</cast_shadows>' if gui_only else ''}
</visual>
""" if visual else ""
}
<inertial>
<mass>{mass}</mass>
<inertia>
<ixx>{inertia_xx_yy}</ixx>
<iyy>{inertia_xx_yy}</iyy>
<izz>{(mass*radius**2)/2}</izz>
<ixy>0.0</ixy>
<ixz>0.0</ixz>
<iyz>0.0</iyz>
</inertia>
</inertial>
</link>
</model>
</sdf>'''
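The same offline check works for the cylinder. A minimal sketch, assuming the `Cylinder` class above is in scope:
# For a solid cylinder, izz = m * r**2 / 2 and ixx = iyy = m * (3 r**2 + l**2) / 12.
radius, length, mass = 0.025, 0.1, 0.1
sdf = Cylinder.get_sdf(
    model_name="demo_cylinder",
    radius=radius,
    length=length,
    mass=mass,
    static=False,
    collision=True,
    friction=1.0,
    visual=True,
    gui_only=False,
    color=(0.2, 0.4, 0.8, 1.0),
)
assert f"<izz>{(mass * radius**2) / 2}</izz>" in sdf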

View file

@ -1,100 +0,0 @@
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Plane(model_wrapper.ModelWrapper):
"""
The Plane class represents a plane model in the Gazebo simulation environment.
It allows for defining a flat surface with collision and visual properties, as well as its orientation and friction settings.
Attributes:
world (scenario_gazebo.World): The Gazebo world where the plane model will be inserted.
name (str): The name of the plane model. Default is "plane".
position (tuple[float, float, float]): The position of the plane in the world. Default is (0, 0, 0).
orientation (tuple[float, float, float, float]): The orientation of the plane in quaternion format. Default is (1, 0, 0, 0).
size (tuple[float, float]): The size of the plane along the x and y axes. Default is (1.0, 1.0).
direction (tuple[float, float, float]): The normal vector representing the plane's direction. Default is (0.0, 0.0, 1.0), representing a horizontal plane.
collision (bool): If True, the plane will have collision properties. Default is True.
friction (float): The friction coefficient for the plane's surface. Default is 1.0.
visual (bool): If True, the plane will have a visual representation. Default is True.
Raises:
RuntimeError: If the plane model fails to be inserted into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "plane",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
size: tuple[float, float] = (1.0, 1.0),
direction: tuple[float, float, float] = (0.0, 0.0, 1.0),
collision: bool = True,
friction: float = 1.0,
visual: bool = True,
**kwargs,
):
model_name = get_unique_model_name(world, name)
initial_pose = scenario_core.Pose(position, orientation)
sdf = f'''<sdf version="1.7">
<model name="{model_name}">
<static>true</static>
<link name="{model_name}_link">
{
f"""
<collision name="{model_name}_collision">
<geometry>
<plane>
<normal>{direction[0]} {direction[1]} {direction[2]}</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
""" if collision else ""
}
{
f"""
<visual name="{model_name}_visual">
<geometry>
<plane>
<normal>{direction[0]} {direction[1]} {direction[2]}</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<material>
<ambient>0.8 0.8 0.8 1</ambient>
<diffuse>0.8 0.8 0.8 1</diffuse>
<specular>0.8 0.8 0.8 1</specular>
</material>
</visual>
""" if visual else ""
}
</link>
</model>
</sdf>'''
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError(f"Failed to insert {model_name}")
model = world.get_model(model_name)
super().__init__(model=model)
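Unlike the other primitives, `Plane` builds its SDF inline and inserts it immediately, so it needs a live world handle. A hedged sketch, assuming `world` is an already initialized `scenario_gazebo.World`:
# Insert a 2 x 2 m plane whose normal is tilted roughly 15 degrees about the x axis.
plane = Plane(
    world=world,
    name="ramp",
    size=(2.0, 2.0),
    direction=(0.0, 0.2588, 0.9659),
    friction=0.8,
)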

View file

@ -1,176 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Sphere(model_wrapper.ModelWrapper):
"""
A Sphere model for Gazebo simulation.
This class represents a spherical object that can be inserted into a Gazebo world.
The sphere can be customized by specifying its physical properties, such as radius, mass,
collision parameters, friction, and visual attributes.
Attributes:
world (scenario_gazebo.World): The Gazebo world where the sphere model will be inserted.
name (str): The name of the sphere model. Defaults to "sphere".
position (tuple[float, float, float]): The position of the sphere in the world. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float]): The orientation of the sphere in quaternion format. Defaults to (1, 0, 0, 0).
radius (float): The radius of the sphere. Defaults to 0.025.
mass (float): The mass of the sphere. Defaults to 0.1.
static (bool): If True, the sphere will be static in the simulation. Defaults to False.
collision (bool): If True, the sphere will have collision properties. Defaults to True.
friction (float): The friction coefficient of the sphere's surface. Defaults to 1.0.
visual (bool): If True, the sphere will have a visual representation. Defaults to True.
gui_only (bool): If True, the visual will only appear in the GUI. Defaults to False.
color (tuple[float, float, float, float]): The RGBA color of the sphere. Defaults to (0.8, 0.8, 0.8, 1.0).
Raises:
RuntimeError: If the sphere fails to be inserted into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "sphere",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
radius: float = 0.025,
mass: float = 0.1,
static: bool = False,
collision: bool = True,
friction: float = 1.0,
visual: bool = True,
gui_only: bool = False,
color: tuple[float, float, float, float] | None = None,
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
if not color:
color = tuple(np.random.uniform(0.0, 1.0, (3,)))
color = (color[0], color[1], color[2], 1.0)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=model_name,
radius=radius,
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError(f"Failed to insert {model_name}")
# Get the model
model = world.get_model(model_name)
# Initialize base class
super().__init__(model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
radius: float,
mass: float,
static: bool,
collision: bool,
friction: float,
visual: bool,
gui_only: bool,
color: tuple[float, float, float, float],
) -> str:
"""
Generates the SDF (Simulation Description Format) string for the sphere model.
Args:
model_name (str): The name of the model.
radius (float): The radius of the sphere.
mass (float): The mass of the sphere.
static (bool): Whether the sphere is static in the simulation.
collision (bool): Whether the sphere should have collision properties.
friction (float): The friction coefficient for the sphere.
visual (bool): Whether the sphere should have a visual representation.
gui_only (bool): Whether the visual representation is only visible in the GUI.
color (tuple[float, float, float, float]): The RGBA color of the sphere.
Returns:
str: The SDF string representing the sphere.
"""
# Inertia is identical for all axes
inertia_xx_yy_zz = (mass * radius**2) * 2 / 5
return f'''<sdf version="1.7">
<model name="{model_name}">
<static>{"true" if static else "false"}</static>
<link name="{model_name}_link">
{
f"""
<collision name="{model_name}_collision">
<geometry>
<sphere>
<radius>{radius}</radius>
</sphere>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
""" if collision else ""
}
{
f"""
<visual name="{model_name}_visual">
<geometry>
<sphere>
<radius>{radius}</radius>
</sphere>
</geometry>
<material>
<ambient>{color[0]} {color[1]} {color[2]} {color[3]}</ambient>
<diffuse>{color[0]} {color[1]} {color[2]} {color[3]}</diffuse>
<specular>{color[0]} {color[1]} {color[2]} {color[3]}</specular>
</material>
<transparency>{1.0 - color[3]}</transparency>
{'<visibility_flags>1</visibility_flags> <cast_shadows>false</cast_shadows>' if gui_only else ''}
</visual>
""" if visual else ""
}
<inertial>
<mass>{mass}</mass>
<inertia>
<ixx>{inertia_xx_yy_zz}</ixx>
<iyy>{inertia_xx_yy_zz}</iyy>
<izz>{inertia_xx_yy_zz}</izz>
<ixy>0.0</ixy>
<ixz>0.0</ixz>
<iyz>0.0</iyz>
</inertia>
</inertial>
</link>
</model>
</sdf>'''
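As with the box and cylinder, the sphere SDF can be generated offline. A minimal sketch, assuming the `Sphere` class above is in scope:
# Every axis of a solid sphere has inertia I = 2/5 * m * r**2.
radius, mass = 0.025, 0.1
sdf = Sphere.get_sdf(
    model_name="demo_sphere",
    radius=radius,
    mass=mass,
    static=False,
    collision=True,
    friction=1.0,
    visual=True,
    gui_only=False,
    color=(0.1, 0.8, 0.1, 1.0),
)
assert f"<ixx>{(mass * radius**2) * 2 / 5}</ixx>" in sdf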

View file

@ -1,84 +0,0 @@
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from numpy.random import RandomState
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from env_manager.models.utils import ModelCollectionRandomizer
class RandomObject(model_wrapper.ModelWrapper):
"""
Represents a randomly selected 3D object model in the Gazebo simulation environment.
This class allows for the insertion of various models based on a collection of model paths,
utilizing a randomization strategy for the chosen model.
Args:
world (scenario_gazebo.World): The Gazebo world where the random object model will be inserted.
name (str, optional): The name of the random object model. Defaults to "object".
position (tuple[float, float, float], optional): The position of the object in the world. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float], optional): The orientation of the object in quaternion format. Defaults to (1, 0, 0, 0).
model_paths (str | None, optional): Paths to the model files. Must be set; raises an error if None.
owner (str, optional): The owner of the model collection. Defaults to "GoogleResearch".
collection (str, optional): The collection of models to choose from. Defaults to "Google Scanned Objects".
server (str, optional): The server URL for the model collection. Defaults to "https://fuel.ignitionrobotics.org".
server_version (str, optional): The version of the server to use. Defaults to "1.0".
unique_cache (bool, optional): If True, enables caching of unique models. Defaults to False.
reset_collection (bool, optional): If True, resets the model collection for new selections. Defaults to False.
np_random (RandomState | None, optional): An instance of RandomState for random number generation. Defaults to None.
**kwargs: Additional keyword arguments.
Raises:
RuntimeError: If the model path is not set or if the random object model fails to be inserted into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "object",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
model_paths: str | None = None,
owner: str = "GoogleResearch",
collection: str = "Google Scanned Objects",
server: str = "https://fuel.ignitionrobotics.org",
server_version: str = "1.0",
unique_cache: bool = False,
reset_collection: bool = False,
np_random: RandomState | None = None,
**kwargs,
):
# Get a unique model name
if model_paths is None:
raise RuntimeError("Set model path for continue")
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
model_collection_randomizer = ModelCollectionRandomizer(
model_paths=model_paths,
owner=owner,
collection=collection,
server=server,
server_version=server_version,
unique_cache=unique_cache,
reset_collection=reset_collection,
np_random=np_random,
)
# Note: using default arguments here
modified_sdf_file = model_collection_randomizer.random_model()
# Insert the model
ok_model = world.to_gazebo().insert_model(
modified_sdf_file, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
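A hedged usage sketch: both the `world` handle and the model cache directory below are assumptions, and `model_paths` must always be set explicitly.
obj = RandomObject(
    world=world,
    name="clutter",
    position=(0.4, 0.1, 0.05),
    model_paths="/path/to/fuel/model/cache",  # hypothetical local cache directory
)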

View file

@ -1,164 +0,0 @@
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from numpy.random import RandomState
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from . import Box, Cylinder, Sphere
class RandomPrimitive(model_wrapper.ModelWrapper):
"""
Represents a randomly generated primitive shape (box, cylinder, or sphere) in the Gazebo simulation environment.
This class allows for the insertion of various primitive models based on the user's specifications or randomly chosen.
Args:
world (scenario_gazebo.World): The Gazebo world where the primitive model will be inserted.
name (str, optional): The name of the primitive model. Defaults to "primitive".
use_specific_primitive ((str | None), optional): If specified, the exact type of primitive to create ('box', 'cylinder', or 'sphere'). Defaults to None, which will randomly select a primitive type.
position (tuple[float, float, float], optional): The position of the primitive in the world. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float], optional): The orientation of the primitive in quaternion format. Defaults to (1, 0, 0, 0).
static (bool, optional): If True, the primitive will be static and immovable. Defaults to False.
collision (bool, optional): If True, the primitive will have collision properties. Defaults to True.
visual (bool, optional): If True, the primitive will have a visual representation. Defaults to True.
gui_only (bool, optional): If True, the visual representation will only appear in the GUI and not in the simulation physics. Defaults to False.
np_random (RandomState | None, optional): An instance of RandomState for random number generation. If None, a default random generator will be used. Defaults to None.
**kwargs: Additional keyword arguments.
Raises:
RuntimeError: If the primitive model fails to be inserted into the Gazebo world.
TypeError: If the specified primitive type is not supported.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "primitive",
use_specific_primitive: (str | None) = None,
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
static: bool = False,
collision: bool = True,
visual: bool = True,
gui_only: bool = False,
np_random: RandomState | None = None,
**kwargs,
):
if np_random is None:
np_random = np.random.default_rng()
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Create SDF string for the model
sdf = self.get_sdf(
model_name=model_name,
use_specific_primitive=use_specific_primitive,
static=static,
collision=collision,
visual=visual,
gui_only=gui_only,
np_random=np_random,
)
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
@classmethod
def get_sdf(
cls,
model_name: str,
use_specific_primitive: (str | None),
static: bool,
collision: bool,
visual: bool,
gui_only: bool,
np_random: RandomState,
) -> str:
"""
Generates the SDF (Simulation Description Format) string for the specified primitive model.
This method can create the SDF representation for a box, cylinder, or sphere based on the provided parameters.
If a specific primitive type is not provided, one will be randomly selected.
Args:
model_name (str): The name of the model being generated.
use_specific_primitive ((str | None)): The specific type of primitive to create ('box', 'cylinder', or 'sphere'). If None, a random primitive will be chosen.
static (bool): If True, the primitive will be static and immovable.
collision (bool): If True, the primitive will have collision properties.
visual (bool): If True, the primitive will have a visual representation.
gui_only (bool): If True, the visual representation will only appear in the GUI and will not affect physics.
np_random (RandomState): An instance of RandomState for random number generation.
Returns:
str: The SDF string that defines the specified primitive model, including its physical and visual properties.
Raises:
TypeError: If the specified primitive type is not supported.
"""
if use_specific_primitive is not None:
primitive = use_specific_primitive
else:
primitive = np_random.choice(["box", "cylinder", "sphere"])
mass = np_random.uniform(0.05, 0.25)
friction = np_random.uniform(0.75, 1.5)
color = tuple(np_random.uniform(0.0, 1.0, (3,)))
color: tuple[float, float, float, float] = (color[0], color[1], color[2], 1.0)
if "box" == primitive:
return Box.get_sdf(
model_name=model_name,
size=tuple(np_random.uniform(0.04, 0.06, (3,))),
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
elif "cylinder" == primitive:
return Cylinder.get_sdf(
model_name=model_name,
radius=np_random.uniform(0.01, 0.0375),
length=np_random.uniform(0.025, 0.05),
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
elif "sphere" == primitive:
return Sphere.get_sdf(
model_name=model_name,
radius=np_random.uniform(0.01, 0.0375),
mass=mass,
static=static,
collision=collision,
friction=friction,
visual=visual,
gui_only=gui_only,
color=color,
)
else:
raise TypeError(
f"Type '{use_specific_primitive}' in not a supported primitive. Pleasure use 'box', 'cylinder' or 'sphere."
)
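Since `RandomPrimitive.get_sdf` only dispatches to the primitive classes above, it can also be exercised offline. A minimal sketch:
import numpy as np

rng = np.random.default_rng(seed=42)
sdf = RandomPrimitive.get_sdf(
    model_name="demo_primitive",
    use_specific_primitive=None,  # pick box, cylinder or sphere at random
    static=False,
    collision=True,
    visual=True,
    gui_only=False,
    np_random=rng,
)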

View file

@ -1,16 +0,0 @@
from enum import Enum
from .rbs_arm import RbsArm
from .robot import RobotWrapper
class RobotEnum(Enum):
RBS_ARM = "rbs_arm"
def get_robot_model_class(robot_model: str) -> type[RobotWrapper]:
model_mapping = {
RobotEnum.RBS_ARM.value: RbsArm,
}
return model_mapping.get(robot_model, RbsArm)
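A small sketch showing the fallback behaviour of the lookup: unknown names resolve to `RbsArm`.
assert get_robot_model_class("rbs_arm") is RbsArm
assert get_robot_model_class("unknown_robot") is RbsArm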

View file

@ -1,153 +0,0 @@
from gym_gz.utils.scenario import get_unique_model_name
from rclpy.node import Node
from robot_builder.parser.urdf import URDF_parser
from robot_builder.elements.robot import Robot
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from .robot import RobotWrapper
class RbsArm(RobotWrapper):
"""
A class representing a robotic arm built using the `robot_builder` library.
This class is responsible for creating a robotic arm model within a Gazebo simulation environment.
It allows for the manipulation of joint positions for both the arm and the gripper.
Attributes:
- DEFAULT_ARM_JOINT_POSITIONS (list[float]): The default joint positions for the arm.
- OPEN_GRIPPER_JOINT_POSITIONS (list[float]): The joint positions for the gripper in the open state.
- CLOSED_GRIPPER_JOINT_POSITIONS (list[float]): The joint positions for the gripper in the closed state.
- DEFAULT_GRIPPER_JOINT_POSITIONS (list[float]): The default joint positions for the gripper.
Args:
- world (scenario_gazebo.World): The Gazebo world where the robot model will be inserted.
- node (Node): The ROS2 node associated with the robotic arm.
- urdf_string (str): The URDF string defining the robot.
- name (str, optional): The name of the robotic arm. Defaults to "rbs_arm".
- position (tuple[float, float, float], optional): The initial position of the robot in the world. Defaults to (0.0, 0.0, 0.0).
- orientation (tuple[float, float, float, float], optional): The initial orientation of the robot in quaternion format. Defaults to (1.0, 0, 0, 0).
- initial_arm_joint_positions (list[float] | float, optional): The initial joint positions for the arm. Defaults to `DEFAULT_ARM_JOINT_POSITIONS`.
- initial_gripper_joint_positions (list[float] | float, optional): The initial joint positions for the gripper. Defaults to `DEFAULT_GRIPPER_JOINT_POSITIONS`.
Raises:
- RuntimeError: If the robot model fails to be inserted into the Gazebo world.
"""
DEFAULT_ARM_JOINT_POSITIONS: list[float] = [
0.0,
0.5,
3.14159,
1.5,
0.0,
1.4,
0.0,
]
OPEN_GRIPPER_JOINT_POSITIONS: list[float] = [
0.064,
]
CLOSED_GRIPPER_JOINT_POSITIONS: list[float] = [
0.0,
]
DEFAULT_GRIPPER_JOINT_POSITIONS: list[float] = CLOSED_GRIPPER_JOINT_POSITIONS
def __init__(
self,
world: scenario_gazebo.World,
node: Node,
urdf_string: str,
name: str = "rbs_arm",
position: tuple[float, float, float] = (0.0, 0.0, 0.0),
orientation: tuple[float, float, float, float] = (1.0, 0, 0, 0),
initial_arm_joint_positions: list[float] | float = DEFAULT_ARM_JOINT_POSITIONS,
initial_gripper_joint_positions: list[float]
| float = DEFAULT_GRIPPER_JOINT_POSITIONS,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Setup initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Insert the model
ok_model = world.insert_model_from_string(urdf_string, initial_pose, model_name)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Parse robot to get metadata
self._robot: Robot = URDF_parser.load_string(urdf_string)
self.__initial_gripper_joint_positions = (
[float(initial_gripper_joint_positions)]
* len(self._robot.gripper_actuated_joint_names)
if isinstance(initial_gripper_joint_positions, (int, float))
else initial_gripper_joint_positions
)
self.__initial_arm_joint_positions = (
[float(initial_arm_joint_positions)] * len(self._robot.actuated_joint_names)
if isinstance(initial_arm_joint_positions, (int, float))
else initial_arm_joint_positions
)
# Set initial joint configuration
self.set_initial_joint_positions(model)
super().__init__(model=model)
@property
def robot(self) -> Robot:
"""Returns the robot metadata parsed from the URDF string.
Returns:
Robot: The robot instance containing metadata.
"""
return self._robot
@property
def initial_arm_joint_positions(self) -> list[float]:
"""Returns the initial joint positions for the arm.
Returns:
list[float]: The initial joint positions for the arm.
"""
return self.__initial_arm_joint_positions
@property
def initial_gripper_joint_positions(self) -> list[float]:
"""Returns the initial joint positions for the gripper.
Returns:
list[float]: The initial joint positions for the gripper.
"""
return self.__initial_gripper_joint_positions
def set_initial_joint_positions(self, model):
"""Sets the initial positions for the robot's joints.
This method resets the joint positions of both the arm and gripper to their specified initial values.
Args:
model: The model representation of the robot within the Gazebo environment.
Raises:
RuntimeError: If resetting the joint positions fails for any of the joints.
"""
model = model.to_gazebo()
joint_position_data = [
(self.__initial_arm_joint_positions, self._robot.actuated_joint_names),
(self.__initial_gripper_joint_positions, self._robot.gripper_actuated_joint_names),
]
for positions, joint_names in joint_position_data:
print(f"Setting joint positions for {joint_names}: {positions}")
if not model.reset_joint_positions(positions, joint_names):
raise RuntimeError(f"Failed to set initial positions of {joint_names}'s joints")

View file

@ -1,95 +0,0 @@
from gym_gz.scenario import model_wrapper
from robot_builder.elements.robot import Robot
from scenario import gazebo as scenario_gazebo
from abc import abstractmethod
class RobotWrapper(model_wrapper.ModelWrapper):
"""
An abstract base class for robot models in a Gazebo simulation.
This class extends the ModelWrapper from gym_gz and provides a structure for creating
robot models with specific configurations, including joint positions for arms and grippers.
Args:
world (scenario_gazebo.World, optional): The Gazebo world where the robot model will be inserted.
urdf_string (str, optional): The URDF (Unified Robot Description Format) string defining the robot.
name (str, optional): The name of the robot model. Defaults to None.
position (tuple[float, float, float], optional): The initial position of the robot in the world. Defaults to None.
orientation (tuple[float, float, float, float], optional): The initial orientation of the robot in quaternion format. Defaults to None.
initial_arm_joint_positions (list[float] | float, optional): The initial joint positions for the arm. Defaults to an empty list.
initial_gripper_joint_positions (list[float] | float, optional): The initial joint positions for the gripper. Defaults to an empty list.
model (model_wrapper.ModelWrapper, optional): An existing model instance to initialize the wrapper. Must be provided.
**kwargs: Additional keyword arguments.
Raises:
ValueError: If the model parameter is not provided.
"""
def __init__(
self,
world: scenario_gazebo.World | None = None,
urdf_string: str | None = None,
name: str | None = None,
position: tuple[float, float, float] | None = None,
orientation: tuple[float, float, float, float] | None = None,
initial_arm_joint_positions: list[float] | float = [],
initial_gripper_joint_positions: list[float] | float = [],
model: model_wrapper.ModelWrapper | None = None,
**kwargs,
):
if model is not None:
super().__init__(model=model)
else:
raise ValueError("Model should be defined for the parent class")
@property
@abstractmethod
def robot(self) -> Robot:
"""Returns the robot instance containing metadata.
This property must be implemented by subclasses to return the specific robot metadata.
Returns:
Robot: The robot instance.
"""
pass
@property
@abstractmethod
def initial_gripper_joint_positions(self) -> list[float]:
"""Returns the initial joint positions for the gripper.
This property must be implemented by subclasses to provide the initial positions of the gripper joints.
Returns:
list[float]: The initial joint positions for the gripper.
"""
pass
@property
@abstractmethod
def initial_arm_joint_positions(self) -> list[float]:
"""Returns the initial joint positions for the arm.
This property must be implemented by subclasses to provide the initial positions of the arm joints.
Returns:
list[float]: The initial joint positions for the arm.
"""
pass
@abstractmethod
def set_initial_joint_positions(self, model):
"""Sets the initial positions for the robot's joints.
This method must be implemented by subclasses to reset the joint positions of the robot
to their specified initial values.
Args:
model: The model representation of the robot within the Gazebo environment.
Raises:
RuntimeError: If resetting the joint positions fails.
"""
pass

View file

@ -1 +0,0 @@
from .camera import Camera

View file

@ -1,436 +0,0 @@
import os
from threading import Thread
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from env_manager.models.utils import ModelCollectionRandomizer
class Camera(model_wrapper.ModelWrapper):
"""
Represents a camera model in a Gazebo simulation.
This class extends the ModelWrapper from gym_gz to define a camera model that can be inserted
into the Gazebo world. It supports different types of cameras and can bridge ROS2 topics for
camera data.
Args:
world (scenario_gazebo.World): The Gazebo world where the camera will be inserted.
name (str, optional): The name of the camera model. If None, a unique name is generated.
position (tuple[float, float, float], optional): The initial position of the camera. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float], optional): The initial orientation of the camera in quaternion. Defaults to (1, 0, 0, 0).
static (bool, optional): If True, the camera is a static model. Defaults to True.
camera_type (str, optional): The type of camera to create. Defaults to "rgbd_camera".
width (int, optional): The width of the camera image. Defaults to 212.
height (int, optional): The height of the camera image. Defaults to 120.
image_format (str, optional): The format of the camera image. Defaults to "R8G8B8".
update_rate (int, optional): The update rate for the camera sensor. Defaults to 15.
horizontal_fov (float, optional): The horizontal field of view for the camera. Defaults to 1.567821.
vertical_fov (float, optional): The vertical field of view for the camera. Defaults to 1.022238.
clip_color (tuple[float, float], optional): The near and far clipping distances for color. Defaults to (0.02, 1000.0).
clip_depth (tuple[float, float], optional): The near and far clipping distances for depth. Defaults to (0.02, 10.0).
noise_mean (float, optional): The mean of the noise added to the camera images. Defaults to None.
noise_stddev (float, optional): The standard deviation of the noise added to the camera images. Defaults to None.
ros2_bridge_color (bool, optional): If True, a ROS2 bridge for color images is created. Defaults to False.
ros2_bridge_depth (bool, optional): If True, a ROS2 bridge for depth images is created. Defaults to False.
ros2_bridge_points (bool, optional): If True, a ROS2 bridge for point cloud data is created. Defaults to False.
visibility_mask (int, optional): The visibility mask for the camera sensor. Defaults to 0.
visual (str, optional): The type of visual representation for the camera. Defaults to None.
Raises:
ValueError: If the visual mesh or textures cannot be found.
RuntimeError: If the camera model fails to insert into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str | None = None,
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
static: bool = True,
camera_type: str = "rgbd_camera",
width: int = 212,
height: int = 120,
image_format: str = "R8G8B8",
update_rate: int = 15,
horizontal_fov: float = 1.567821,
vertical_fov: float = 1.022238,
clip_color: tuple[float, float] = (0.02, 1000.0),
clip_depth: tuple[float, float] = (0.02, 10.0),
noise_mean: float | None = None,
noise_stddev: float | None = None,
ros2_bridge_color: bool = False,
ros2_bridge_depth: bool = False,
ros2_bridge_points: bool = False,
visibility_mask: int = 0,
visual: str | None = None,
):
# Get a unique model name
if name is not None:
model_name = get_unique_model_name(world, name)
else:
model_name = get_unique_model_name(world, camera_type)
self._model_name = model_name
self._camera_type = camera_type
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
use_mesh: bool = False
mesh_path_visual: str = ""
albedo_map = None
normal_map = None
roughness_map = None
metalness_map = None
# Get resources for visual (if enabled)
if visual:
if "intel_realsense_d435" == visual:
use_mesh = True
# Get path to the model and the important directories
model_path = ModelCollectionRandomizer.get_collection_paths(
owner="OpenRobotics",
collection="",
model_name="Intel RealSense D435",
)[0]
mesh_dir = os.path.join(model_path, "meshes")
texture_dir = os.path.join(model_path, "materials", "textures")
# Get path to the mesh
mesh_path_visual = os.path.join(mesh_dir, "realsense.dae")
# Make sure that it exists
if not os.path.exists(mesh_path_visual):
raise ValueError(
f"Visual mesh '{mesh_path_visual}' for Camera model is not a valid file."
)
# Find PBR textures
if texture_dir:
# List all files
texture_files = os.listdir(texture_dir)
# Extract the appropriate files
for texture in texture_files:
texture_lower = texture.lower()
if "basecolor" in texture_lower or "albedo" in texture_lower:
albedo_map = os.path.join(texture_dir, texture)
elif "normal" in texture_lower:
normal_map = os.path.join(texture_dir, texture)
elif "roughness" in texture_lower:
roughness_map = os.path.join(texture_dir, texture)
elif (
"specular" in texture_lower or "metalness" in texture_lower
):
metalness_map = os.path.join(texture_dir, texture)
if not (albedo_map and normal_map and roughness_map and metalness_map):
raise ValueError("Not all textures for Camera model were found.")
# Create SDF string for the model
sdf = f'''<sdf version="1.9">
<model name="{model_name}">
<static>{"true" if static else "false"}</static>
<link name="{self.link_name}">
<sensor name="camera" type="{camera_type}">
<topic>{model_name}</topic>
<always_on>true</always_on>
<update_rate>{update_rate}</update_rate>
<camera name="{model_name}_camera">
<image>
<width>{width}</width>
<height>{height}</height>
<format>{image_format}</format>
</image>
<horizontal_fov>{horizontal_fov}</horizontal_fov>
<vertical_fov>{vertical_fov}</vertical_fov>
<clip>
<near>{clip_color[0]}</near>
<far>{clip_color[1]}</far>
</clip>
{
f"""<depth_camera>
<clip>
<near>{clip_depth[0]}</near>
<far>{clip_depth[1]}</far>
</clip>
</depth_camera>""" if "rgbd" in model_name else ""
}
{
f"""<noise>
<type>gaussian</type>
<mean>{noise_mean}</mean>
<stddev>{noise_stddev}</stddev>
</noise>""" if noise_mean is not None and noise_stddev is not None else ""
}
<visibility_mask>{visibility_mask}</visibility_mask>
</camera>
<visualize>true</visualize>
</sensor>
{
f"""
<visual name="{model_name}_visual_lens">
<pose>-0.01 0 0 0 1.5707963 0</pose>
<geometry>
<cylinder>
<radius>0.02</radius>
<length>0.02</length>
</cylinder>
</geometry>
<material>
<ambient>0.0 0.8 0.0</ambient>
<diffuse>0.0 0.8 0.0</diffuse>
<specular>0.0 0.8 0.0</specular>
</material>
</visual>
<visual name="{model_name}_visual_body">
<pose>-0.05 0 0 0 0 0</pose>
<geometry>
<box>
<size>0.06 0.05 0.05</size>
</box>
</geometry>
<material>
<ambient>0.0 0.8 0.0</ambient>
<diffuse>0.0 0.8 0.0</diffuse>
<specular>0.0 0.8 0.0</specular>
</material>
</visual>
""" if visual and not use_mesh else ""
}
{
f"""
<inertial>
<mass>0.0615752</mass>
<inertia>
<ixx>9.108e-05</ixx>
<ixy>0.0</ixy>
<ixz>0.0</ixz>
<iyy>2.51e-06</iyy>
<iyz>0.0</iyz>
<izz>8.931e-05</izz>
</inertia>
</inertial>
<visual name="{model_name}_visual">
<pose>0 0 0 0 0 1.5707963</pose>
<geometry>
<mesh>
<uri>{mesh_path_visual}</uri>
<submesh>
<name>RealSense</name>
<center>false</center>
</submesh>
</mesh>
</geometry>
<material>
<diffuse>1 1 1 1</diffuse>
<specular>1 1 1 1</specular>
<pbr>
<metal>
<albedo_map>{albedo_map}</albedo_map>
<normal_map>{normal_map}</normal_map>
<roughness_map>{roughness_map}</roughness_map>
<metalness_map>{metalness_map}</metalness_map>
</metal>
</pbr>
</material>
</visual>
""" if visual and use_mesh else ""
}
</link>
</model>
</sdf>'''
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
if ros2_bridge_color or ros2_bridge_depth or ros2_bridge_points:
self.__threads = []
self.__threads.append(
Thread(
target=self.construct_ros2_bridge,
args=(
self.info_topic,
"sensor_msgs/msg/CameraInfo",
"gz.msgs.CameraInfo",
),
daemon=True,
)
)
if ros2_bridge_color:
self.__threads.append(
Thread(
target=self.construct_ros2_bridge,
args=(
self.color_topic,
"sensor_msgs/msg/Image",
"gz.msgs.Image",
),
daemon=True,
)
)
if ros2_bridge_depth:
self.__threads.append(
Thread(
target=self.construct_ros2_bridge,
args=(
self.depth_topic,
"sensor_msgs/msg/Image",
"gz.msgs.Image",
),
daemon=True,
)
)
if ros2_bridge_points:
self.__threads.append(
Thread(
target=self.construct_ros2_bridge,
args=(
self.points_topic,
"sensor_msgs/msg/PointCloud2",
"gz.msgs.PointCloudPacked",
),
daemon=True,
)
)
for thread in self.__threads:
thread.start()
def __del__(self):
"""Cleans up threads when the Camera object is deleted."""
if hasattr(self, "__threads"):
for thread in self.__threads:
thread.join()
@classmethod
def construct_ros2_bridge(cls, topic: str, ros_msg: str, ign_msg: str):
"""
Constructs and runs a ROS2 bridge command for a given topic.
Args:
topic (str): The topic to bridge.
ros_msg (str): The ROS2 message type to use.
ign_msg (str): The Ignition message type to use.
"""
node_name = "parameter_bridge" + topic.replace("/", "_")
command = (
f"ros2 run ros_gz_bridge parameter_bridge {topic}@{ros_msg}[{ign_msg} "
+ f"--ros-args --remap __node:={node_name} --ros-args -p use_sim_time:=true"
)
os.system(command)
@classmethod
def get_frame_id(cls, model_name: str) -> str:
"""
Gets the frame ID for the camera model.
Args:
model_name (str): The name of the camera model.
Returns:
str: The frame ID.
"""
return f"{model_name}/{model_name}_link/camera"
@property
def frame_id(self) -> str:
"""Returns the frame ID of the camera."""
return self.get_frame_id(self._model_name)
@classmethod
def get_color_topic(cls, model_name: str, camera_type: str) -> str:
"""
Gets the color topic for the camera.
Args:
model_name (str): The name of the camera model.
camera_type (str): The type of the camera.
Returns:
str: The color topic.
"""
return f"/{model_name}/image" if "rgbd" in camera_type else f"/{model_name}"
@property
def color_topic(self) -> str:
"""Returns the color topic for the camera."""
return self.get_color_topic(self._model_name, self._camera_type)
@classmethod
def get_depth_topic(cls, model_name: str, camera_type: str) -> str:
"""
Gets the depth topic for the camera.
Args:
model_name (str): The name of the camera model.
camera_type (str): The type of the camera.
Returns:
str: The depth topic.
"""
return (
f"/{model_name}/depth_image" if "rgbd" in camera_type else f"/{model_name}"
)
@property
def depth_topic(self) -> str:
"""Returns the depth topic for the camera."""
return self.get_depth_topic(self._model_name, self._camera_type)
@classmethod
def get_points_topic(cls, model_name: str) -> str:
"""
Gets the points topic for the camera.
Args:
model_name (str): The name of the camera model.
Returns:
str: /{model_name}/points.
"""
return f"/{model_name}/points"
@property
def points_topic(self) -> str:
"""Returns the points topic for the camera."""
return self.get_points_topic(self._model_name)
@property
def info_topic(self):
"""Returns the camera info topic."""
return f"/{self._model_name}/camera_info"
@classmethod
def get_link_name(cls, model_name: str) -> str:
"""
Gets the link name for the camera model.
Args:
model_name (str): The name of the camera model.
Returns:
str: {model_name}_link.
"""
return f"{model_name}_link"
@property
def link_name(self) -> str:
"""Returns the link name for the camera."""
return self.get_link_name(self._model_name)
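The topic and frame helpers are plain classmethods, so the naming scheme can be checked without a simulator. A minimal sketch:
assert Camera.get_color_topic("rgbd_camera0", "rgbd_camera") == "/rgbd_camera0/image"
assert Camera.get_depth_topic("rgbd_camera0", "rgbd_camera") == "/rgbd_camera0/depth_image"
assert Camera.get_color_topic("camera0", "camera") == "/camera0"
assert Camera.get_points_topic("rgbd_camera0") == "/rgbd_camera0/points"
assert Camera.get_frame_id("rgbd_camera0") == "rgbd_camera0/rgbd_camera0_link/camera"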

View file

@ -1,25 +0,0 @@
# from gym_gz.scenario.model_wrapper import ModelWrapper
from .ground import Ground
# def get_terrain_model_class(terrain_type: str) -> type[ModelWrapper]:
# # TODO: Refactor into enum
#
# if "flat" == terrain_type:
# return Ground
# elif "random_flat" == terrain_type:
# return RandomGround
# elif "lunar_heightmap" == terrain_type:
# return LunarHeightmap
# elif "lunar_surface" == terrain_type:
# return LunarSurface
# elif "random_lunar_surface" == terrain_type:
# return RandomLunarSurface
# else:
# raise AttributeError(f"Unsupported terrain [{terrain_type}]")
#
#
# def is_terrain_type_randomizable(terrain_type: str) -> bool:
#
# return "random_flat" == terrain_type or "random_lunar_surface" == terrain_type

View file

@ -1,104 +0,0 @@
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
class Ground(model_wrapper.ModelWrapper):
"""
Represents a ground model in a Gazebo simulation.
This class extends the ModelWrapper from gym_gz to define a ground model that can be
inserted into the Gazebo world. The ground is defined by its size, position, orientation,
and friction properties.
Args:
world (scenario_gazebo.World): The Gazebo world where the ground will be inserted.
name (str, optional): The name of the ground model. Defaults to "ground".
position (tuple[float, float, float], optional): The initial position of the ground. Defaults to (0, 0, 0).
orientation (tuple[float, float, float, float], optional): The initial orientation of the ground in quaternion. Defaults to (1, 0, 0, 0).
size (tuple[float, float], optional): The size of the ground plane. Defaults to (1.5, 1.5).
collision_thickness (float, optional): The thickness of the collision surface. Defaults to 0.05.
friction (float, optional): The friction coefficient for the ground surface. Defaults to 5.0.
ambient (tuple[float, float, float, float], optional): The ambient color of the material. Defaults to (0.8, 0.8, 0.8, 1.0).
specular (tuple[float, float, float, float], optional): The specular color of the material. Defaults to (0.8, 0.8, 0.8, 1.0).
diffuse (tuple[float, float, float, float], optional): The diffuse color of the material. Defaults to (0.8, 0.8, 0.8, 1.0).
**kwargs: Additional keyword arguments for future extensions.
Raises:
RuntimeError: If the ground model fails to insert into the Gazebo world.
"""
def __init__(
self,
world: scenario_gazebo.World,
name: str = "ground",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
size: tuple[float, float] = (1.5, 1.5),
collision_thickness=0.05,
friction: float = 5.0,
ambient: tuple[float, float, float, float] = (0.8, 0.8, 0.8, 1.0),
specular: tuple[float, float, float, float] = (0.8, 0.8, 0.8, 1.0),
diffuse: tuple[float, float, float, float] = (0.8, 0.8, 0.8, 1.0),
**kwargs,
):
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Create SDF string for the model with the provided material properties
sdf = f"""<sdf version="1.9">
<model name="{model_name}">
<static>true</static>
<link name="{model_name}_link">
<collision name="{model_name}_collision">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
<visual name="{model_name}_visual">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<material>
<ambient>{' '.join(map(str, ambient))}</ambient>
<diffuse>{' '.join(map(str, diffuse))}</diffuse>
<specular>{' '.join(map(str, specular))}</specular>
</material>
</visual>
</link>
</model>
</sdf>"""
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
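A hedged usage sketch, assuming `world` is an already initialized `scenario_gazebo.World`:
ground = Ground(
    world=world,
    size=(2.0, 2.0),
    friction=5.0,
    ambient=(0.6, 0.6, 0.6, 1.0),
    diffuse=(0.6, 0.6, 0.6, 1.0),
)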

View file

@ -1,137 +0,0 @@
import os
import numpy as np
from gym_gz.scenario import model_wrapper
from gym_gz.utils.scenario import get_unique_model_name
from scenario import core as scenario_core
from scenario import gazebo as scenario_gazebo
from rbs_assets_library import get_textures_path
class RandomGround(model_wrapper.ModelWrapper):
def __init__(
self,
world: scenario_gazebo.World,
name: str = "random_ground",
position: tuple[float, float, float] = (0, 0, 0),
orientation: tuple[float, float, float, float] = (1, 0, 0, 0),
size: tuple[float, float] = (1.0, 1.0),
friction: float = 5.0,
texture_dir: str | None = None,
**kwargs,
):
np_random = np.random.default_rng()
# Get a unique model name
model_name = get_unique_model_name(world, name)
# Initial pose
initial_pose = scenario_core.Pose(position, orientation)
# Get textures from ENV variable if not directly specified
if not texture_dir:
texture_dir = os.environ.get("TEXTURE_DIRS", default="")
if not texture_dir:
texture_dir = get_textures_path()
# Find random PBR texture
albedo_map = None
normal_map = None
roughness_map = None
metalness_map = None
if texture_dir:
if ":" in texture_dir:
# Multiple colon-separated directories: collect full paths to every texture set
textures = []
for d in texture_dir.split(":"):
textures.extend([os.path.join(d, f) for f in os.listdir(d)])
else:
# Single directory: collect full paths to the available texture sets
textures = [os.path.join(texture_dir, f) for f in os.listdir(texture_dir)]
# Choose a random texture set (entries are already full paths)
random_texture_dir = str(np_random.choice(textures))
# List all files
texture_files = os.listdir(random_texture_dir)
# Extract the appropriate files
for texture in texture_files:
texture_lower = texture.lower()
if "color" in texture_lower or "albedo" in texture_lower:
albedo_map = os.path.join(random_texture_dir, texture)
elif "normal" in texture_lower:
normal_map = os.path.join(random_texture_dir, texture)
elif "roughness" in texture_lower:
roughness_map = os.path.join(random_texture_dir, texture)
elif "specular" in texture_lower or "metalness" in texture_lower:
metalness_map = os.path.join(random_texture_dir, texture)
# Create SDF string for the model
sdf = f"""<sdf version="1.9">
<model name="{model_name}">
<static>true</static>
<link name="{model_name}_link">
<collision name="{model_name}_collision">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<surface>
<friction>
<ode>
<mu>{friction}</mu>
<mu2>{friction}</mu2>
<fdir1>0 0 0</fdir1>
<slip1>0.0</slip1>
<slip2>0.0</slip2>
</ode>
</friction>
</surface>
</collision>
<visual name="{model_name}_visual">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>{size[0]} {size[1]}</size>
</plane>
</geometry>
<material>
<ambient>1 1 1 1</ambient>
<diffuse>1 1 1 1</diffuse>
<specular>1 1 1 1</specular>
<pbr>
<metal>
{"<albedo_map>%s</albedo_map>"
% albedo_map if albedo_map is not None else ""}
{"<normal_map>%s</normal_map>"
% normal_map if normal_map is not None else ""}
{"<roughness_map>%s</roughness_map>"
% roughness_map if roughness_map is not None else ""}
{"<metalness_map>%s</metalness_map>"
% metalness_map if metalness_map is not None else ""}
</metal>
</pbr>
</material>
</visual>
</link>
</model>
</sdf>"""
# Insert the model
ok_model = world.to_gazebo().insert_model_from_string(
sdf, initial_pose, model_name
)
if not ok_model:
raise RuntimeError("Failed to insert " + model_name)
# Get the model
model = world.get_model(model_name)
# Initialize base class
model_wrapper.ModelWrapper.__init__(self, model=model)
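A hedged usage sketch: the texture path is hypothetical, and `world` is assumed to be an initialized `scenario_gazebo.World`. Several directories can also be supplied via a colon-separated `TEXTURE_DIRS` environment variable instead of the `texture_dir` argument.
terrain = RandomGround(
    world=world,
    size=(1.5, 1.5),
    texture_dir="/path/to/pbr/textures",  # hypothetical directory of texture sets
)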

View file

@ -1,2 +0,0 @@
from .model_collection_randomizer import ModelCollectionRandomizer
from .xacro2sdf import xacro2sdf

View file

@ -1,37 +0,0 @@
import subprocess
import tempfile
from typing import Dict, Optional, Tuple
import xacro
def xacro2sdf(
input_file_path: str, mappings: Dict, model_path_remap: Optional[Tuple[str, str]]
) -> str:
"""Convert xacro (URDF variant) with given arguments to SDF and return as a string."""
# Convert all values in mappings to strings
for keys, values in mappings.items():
mappings[keys] = str(values)
# Convert xacro to URDF
urdf_xml = xacro.process(input_file_name=input_file_path, mappings=mappings)
# Create temporary file for URDF (`ign sdf -p` accepts only files)
with tempfile.NamedTemporaryFile() as tmp_urdf:
with open(tmp_urdf.name, "w") as urdf_file:
urdf_file.write(urdf_xml)
# Convert to SDF
result = subprocess.run(
["ign", "sdf", "-p", tmp_urdf.name], stdout=subprocess.PIPE
)
sdf_xml = result.stdout.decode("utf-8")
# Remap package name to model name, such that meshes can be located by Ignition
if model_path_remap is not None:
sdf_xml = sdf_xml.replace(model_path_remap[0], model_path_remap[1])
# Return as string
return sdf_xml
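A hedged usage sketch: the file name, xacro arguments and package remap are hypothetical, and the `ign` CLI must be available on PATH for the URDF-to-SDF conversion.
sdf_xml = xacro2sdf(
    input_file_path="robot.urdf.xacro",
    mappings={"gripper": True, "hardware": "gazebo"},
    model_path_remap=("package://my_robot_description", "model://my_robot"),
)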

View file

@ -1 +0,0 @@
from .scene import Scene

File diff suppressed because it is too large

View file

@ -1,4 +0,0 @@
from . import conversions, gazebo, logging, math, types
from .tf2_broadcaster import Tf2Broadcaster, Tf2BroadcasterStandalone
from .tf2_dynamic_broadcaster import Tf2DynamicBroadcaster
from .tf2_listener import Tf2Listener, Tf2ListenerStandalone

View file

@ -1,182 +0,0 @@
from typing import Tuple, Union
import numpy
# import open3d
# import pyoctree
import sensor_msgs
from scipy.spatial.transform import Rotation
# from sensor_msgs.msg import PointCloud2
# from open3d.geometry import PointCloud
from geometry_msgs.msg import Transform
# def pointcloud2_to_open3d(
# ros_point_cloud2: PointCloud2,
# include_color: bool = False,
# include_intensity: bool = False,
# # Note: Order does not matter for DL, that's why channel swapping is disabled by default
# fix_rgb_channel_order: bool = False,
# ) -> PointCloud:
#
# # Create output Open3D PointCloud
# open3d_pc = PointCloud()
#
# size = ros_point_cloud2.width * ros_point_cloud2.height
# xyz_dtype = ">f4" if ros_point_cloud2.is_bigendian else "<f4"
# xyz = numpy.ndarray(
# shape=(size, 3),
# dtype=xyz_dtype,
# buffer=ros_point_cloud2.data,
# offset=0,
# strides=(ros_point_cloud2.point_step, 4),
# )
#
# valid_points = numpy.isfinite(xyz).any(axis=1)
# open3d_pc.points = open3d.utility.Vector3dVector(
# xyz[valid_points].astype(numpy.float64)
# )
#
# if include_color or include_intensity:
# if len(ros_point_cloud2.fields) > 3:
# bgr = numpy.ndarray(
# shape=(size, 3),
# dtype=numpy.uint8,
# buffer=ros_point_cloud2.data,
# offset=ros_point_cloud2.fields[3].offset,
# strides=(ros_point_cloud2.point_step, 1),
# )
# if fix_rgb_channel_order:
# # Swap channels to gain rgb (faster than `bgr[:, [2, 1, 0]]`)
# bgr[:, 0], bgr[:, 2] = bgr[:, 2], bgr[:, 0].copy()
# open3d_pc.colors = open3d.utility.Vector3dVector(
# (bgr[valid_points] / 255).astype(numpy.float64)
# )
# else:
# open3d_pc.colors = open3d.utility.Vector3dVector(
# numpy.zeros((len(valid_points), 3), dtype=numpy.float64)
# )
# # TODO: Update octree creator once L8 image format is supported in Ignition Gazebo
# # elif include_intensity:
# # # Faster approach, but only the first channel gets the intensity value (rest is 0)
# # intensities = numpy.zeros((len(valid_points), 3), dtype=numpy.float64)
# # intensities[:, [0]] = (
# # numpy.ndarray(
# # shape=(size, 1),
# # dtype=numpy.uint8,
# # buffer=ros_point_cloud2.data,
# # offset=ros_point_cloud2.fields[3].offset,
# # strides=(ros_point_cloud2.point_step, 1),
# # )[valid_points]
# # / 255
# # ).astype(numpy.float64)
# # open3d_pc.colors = open3d.utility.Vector3dVector(intensities)
# # # # Slower approach, but all channels get the intensity value
# # # intensities = numpy.ndarray(
# # # shape=(size, 1),
# # # dtype=numpy.uint8,
# # # buffer=ros_point_cloud2.data,
# # # offset=ros_point_cloud2.fields[3].offset,
# # # strides=(ros_point_cloud2.point_step, 1),
# # # )
# # # open3d_pc.colors = open3d.utility.Vector3dVector(
# # # numpy.tile(intensities[valid_points] / 255, (1, 3)).astype(numpy.float64)
# # # )
#
# # Return the converted Open3D PointCloud
# return open3d_pc
def transform_to_matrix(transform: Transform) -> numpy.ndarray:
transform_matrix = numpy.zeros((4, 4))
transform_matrix[3, 3] = 1.0
# open3d is commented out above, so build the rotation with scipy instead;
# Rotation.from_quat expects the quaternion in xyzw order.
transform_matrix[0:3, 0:3] = Rotation.from_quat(
[
transform.rotation.x,
transform.rotation.y,
transform.rotation.z,
transform.rotation.w,
]
).as_matrix()
transform_matrix[0, 3] = transform.translation.x
transform_matrix[1, 3] = transform.translation.y
transform_matrix[2, 3] = transform.translation.z
return transform_matrix
# def open3d_point_cloud_to_octree_points(
# open3d_point_cloud: PointCloud,
# include_color: bool = False,
# include_intensity: bool = False,
# ) -> pyoctree.Points:
#
# octree_points = pyoctree.Points()
#
# if include_color:
# features = numpy.reshape(numpy.asarray(open3d_point_cloud.colors), -1)
# elif include_intensity:
# features = numpy.asarray(open3d_point_cloud.colors)[:, 0]
# else:
# features = []
#
# octree_points.set_points(
# # XYZ points
# numpy.reshape(numpy.asarray(open3d_point_cloud.points), -1),
# # Normals
# numpy.reshape(numpy.asarray(open3d_point_cloud.normals), -1),
# # Other features, e.g. color
# features,
# # Labels - not used
# [],
# )
#
# return octree_points
def orientation_6d_to_quat(
    v1: Tuple[float, float, float], v2: Tuple[float, float, float]
) -> Tuple[float, float, float, float]:
    # Normalize the first vector
    col1 = v1 / numpy.linalg.norm(v1)
    # Make the second vector orthogonal to the first (Gram-Schmidt) and normalize it,
    # so the two input vectors do not need to be orthogonal themselves
    v2 = numpy.asarray(v2, dtype=numpy.float64)
    col2 = v2 - numpy.dot(col1, v2) * col1
    col2 = col2 / numpy.linalg.norm(col2)
    # The third column is the cross product of the first two
    col3 = numpy.cross(col1, col2)
    # Stack into rotation matrix as columns, convert to quaternion and return
    quat_xyzw = Rotation.from_matrix(numpy.array([col1, col2, col3]).T).as_quat()
    return quat_xyzw
def orientation_quat_to_6d(
quat_xyzw: Tuple[float, float, float, float]
) -> Tuple[Tuple[float, float, float], Tuple[float, float, float]]:
# Convert quaternion into rotation matrix
rot_mat = Rotation.from_quat(quat_xyzw).as_matrix()
# Return first two columns (already normalised)
return (tuple(rot_mat[:, 0]), tuple(rot_mat[:, 1]))
def quat_to_wxyz(
    xyzw: Union[numpy.ndarray, Tuple[float, float, float, float]]
) -> Union[numpy.ndarray, Tuple[float, float, float, float]]:
    # Accept both tuples and numpy arrays, mirroring quat_to_xyzw below
    if isinstance(xyzw, tuple):
        return (xyzw[3], xyzw[0], xyzw[1], xyzw[2])
    return xyzw[[3, 0, 1, 2]]
def quat_to_xyzw(
    wxyz: Union[numpy.ndarray, Tuple[float, float, float, float]]
) -> Union[numpy.ndarray, Tuple[float, float, float, float]]:
    if isinstance(wxyz, tuple):
        return (wxyz[1], wxyz[2], wxyz[3], wxyz[0])
    return wxyz[[1, 2, 3, 0]]
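A minimal usage sketch for the conversion helpers above, assuming they are imported from this module; the rotation values are illustrative only:
import numpy
from scipy.spatial.transform import Rotation
# Round-trip a quaternion through the 6D orientation representation
quat_xyzw = Rotation.from_euler("xyz", [0.1, 0.2, 0.3]).as_quat()
v1, v2 = orientation_quat_to_6d(tuple(quat_xyzw))
quat_back = orientation_6d_to_quat(v1, v2)
assert numpy.isclose(abs(numpy.dot(quat_xyzw, quat_back)), 1.0)
# Reorder between xyzw and wxyz conventions
wxyz = quat_to_wxyz(tuple(quat_xyzw))
assert quat_to_xyzw(wxyz) == tuple(quat_xyzw)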

View file

@ -1,262 +0,0 @@
from typing import Tuple, Union
from gym_gz.scenario.model_wrapper import ModelWrapper
from numpy import exp
from scenario.bindings.core import Link, World
from scipy.spatial.transform import Rotation
from .conversions import quat_to_wxyz, quat_to_xyzw
from .math import quat_mul
from .types import Pose, Point, Quat
#NOTE: Errors in pyright will be fixed only with pybind11 in the scenario module
def get_model_pose(
world: World,
model: ModelWrapper | str,
link: Link | str | None = None,
xyzw: bool = False,
) -> Pose:
"""
    Return the pose of the model's link. The orientation is a wxyz quaternion by default, or xyzw when `xyzw=True`.
"""
if isinstance(model, str):
# Get reference to the model from its name if string is passed
model = world.to_gazebo().get_model(model).to_gazebo()
if link is None:
# Use the first link if not specified
link = model.get_link(link_name=model.link_names()[0])
elif isinstance(link, str):
# Get reference to the link from its name if string
link = model.get_link(link_name=link)
# Get position and orientation
position = link.position()
quat = link.orientation()
# Convert to xyzw order if desired
if xyzw:
quat = quat_to_xyzw(quat)
# Return pose of the model's link
return (
position,
quat,
)
def get_model_position(
world: World,
model: ModelWrapper | str,
link: Link | str | None = None,
) -> Point:
"""
Return position of model's link.
"""
if isinstance(model, str):
# Get reference to the model from its name if string is passed
model = world.to_gazebo().get_model(model).to_gazebo()
if link is None:
# Use the first link if not specified
link = model.get_link(link_name=model.link_names()[0])
elif isinstance(link, str):
# Get reference to the link from its name if string
link = model.get_link(link_name=link)
# Return position of the model's link
return link.position()
def get_model_orientation(
world: World,
model: ModelWrapper | str,
link: Link | str | None = None,
xyzw: bool = False,
) -> Quat:
"""
    Return the orientation of the model's link as a wxyz quaternion by default, or xyzw when `xyzw=True`.
"""
if isinstance(model, str):
# Get reference to the model from its name if string is passed
model = world.to_gazebo().get_model(model).to_gazebo()
if link is None:
# Use the first link if not specified
link = model.get_link(link_name=model.link_names()[0])
elif isinstance(link, str):
# Get reference to the link from its name if string
link = model.get_link(link_name=link)
# Get orientation
quat = link.orientation()
# Convert to xyzw order if desired
if xyzw:
quat = quat_to_xyzw(quat)
# Return orientation of the model's link
return quat
def transform_move_to_model_pose(
world: World,
position: Point,
quat: Quat,
target_model: ModelWrapper | str,
target_link: Link | str | None = None,
xyzw: bool = False,
) -> Pose:
"""
    Treat `position` and `quat` as being expressed relative to `target_model::target_link`
    and return the corresponding pose in the world coordinate system.
"""
target_frame_position, target_frame_quat = get_model_pose(
world,
model=target_model,
link=target_link,
xyzw=True,
)
transformed_position = Rotation.from_quat(target_frame_quat).apply(position)
transformed_position = (
transformed_position[0] + target_frame_position[0],
transformed_position[1] + target_frame_position[1],
transformed_position[2] + target_frame_position[2],
)
if not xyzw:
target_frame_quat = quat_to_wxyz(target_frame_quat)
transformed_quat = quat_mul(quat, target_frame_quat, xyzw=xyzw)
return (transformed_position, transformed_quat)
def transform_move_to_model_position(
world: World,
position: Point,
target_model: ModelWrapper | str,
target_link: Link | str | None = None,
) -> Point:
target_frame_position, target_frame_quat_xyzw = get_model_pose(
world,
model=target_model,
link=target_link,
xyzw=True,
)
transformed_position = Rotation.from_quat(target_frame_quat_xyzw).apply(position)
transformed_position = (
target_frame_position[0] + transformed_position[0],
target_frame_position[1] + transformed_position[1],
target_frame_position[2] + transformed_position[2],
)
return transformed_position
def transform_move_to_model_orientation(
world: World,
quat: Quat,
target_model: ModelWrapper | str,
target_link: Link | str | None = None,
xyzw: bool = False,
) -> Quat:
target_frame_quat = get_model_orientation(
world,
model=target_model,
link=target_link,
xyzw=xyzw,
)
transformed_quat = quat_mul(quat, target_frame_quat, xyzw=xyzw)
return transformed_quat
def transform_change_reference_frame_pose(
world: World,
position: Point,
quat: Quat,
target_model: ModelWrapper | str,
target_link: Link | str | None,
xyzw: bool = False,
) -> Pose:
"""
Change reference frame of original `position` and `quat` from world coordinate system to `target_model::target_link` coordinate system.
"""
target_frame_position, target_frame_quat = get_model_pose(
world,
model=target_model,
link=target_link,
xyzw=True,
)
transformed_position = (
position[0] - target_frame_position[0],
position[1] - target_frame_position[1],
position[2] - target_frame_position[2],
)
transformed_position = Rotation.from_quat(target_frame_quat).apply(
transformed_position, inverse=True
)
if not xyzw:
target_frame_quat = quat_to_wxyz(target_frame_quat)
transformed_quat = quat_mul(target_frame_quat, quat, xyzw=xyzw)
return (tuple(transformed_position), transformed_quat)
def transform_change_reference_frame_position(
world: World,
position: Point,
target_model: ModelWrapper | str,
target_link: Link | str | None = None,
) -> Point:
target_frame_position, target_frame_quat_xyzw = get_model_pose(
world,
model=target_model,
link=target_link,
xyzw=True,
)
transformed_position = (
position[0] - target_frame_position[0],
position[1] - target_frame_position[1],
position[2] - target_frame_position[2],
)
transformed_position = Rotation.from_quat(target_frame_quat_xyzw).apply(
transformed_position, inverse=True
)
return tuple(transformed_position)
def transform_change_reference_frame_orientation(
world: World,
quat: Quat,
target_model: ModelWrapper | str,
target_link: Link | str | None = None,
xyzw: bool = False,
) -> Quat:
target_frame_quat = get_model_orientation(
world,
model=target_model,
link=target_link,
xyzw=xyzw,
)
transformed_quat = quat_mul(target_frame_quat, quat, xyzw=xyzw)
return transformed_quat
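For reference, the frame-change convention used by these helpers can be reproduced for a single point without a Gazebo world; the sketch below uses scipy directly and illustrative values:
from scipy.spatial.transform import Rotation
# Target frame pose in world coordinates (xyzw quaternion), illustrative values
frame_position = (1.0, 0.0, 0.5)
frame_quat_xyzw = Rotation.from_euler("z", 90, degrees=True).as_quat()
# Express a world-frame point in the target frame, mirroring
# transform_change_reference_frame_position()
world_point = (1.0, 1.0, 0.5)
local_point = Rotation.from_quat(frame_quat_xyzw).apply(
    [p - f for p, f in zip(world_point, frame_position)], inverse=True
)
# local_point is approximately (1.0, 0.0, 0.0)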

View file

@ -1,488 +0,0 @@
import numpy as np
from gym_gz.scenario.model_wrapper import ModelWrapper
from rclpy.node import Node
from scenario.bindings.core import World
from scipy.spatial.transform import Rotation
from .conversions import orientation_6d_to_quat
from .gazebo import (
get_model_orientation,
get_model_pose,
get_model_position,
transform_change_reference_frame_orientation,
transform_change_reference_frame_pose,
transform_change_reference_frame_position,
)
from .math import quat_mul
from .tf2_listener import Tf2Listener
from .types import Point, Pose, PoseRpy, Quat, Rpy
# Helper functions #
def get_relative_ee_position(self, translation: Point) -> Point:
# Scale relative action to metric units
translation = self.scale_relative_translation(translation)
# Get current position
current_position = self.get_ee_position()
# Compute target position
target_position = (
current_position[0] + translation[0],
current_position[1] + translation[1],
current_position[2] + translation[2],
)
# Restrict target position to a limited workspace, if desired
if self._restrict_position_goal_to_workspace:
target_position = self.restrict_position_goal_to_workspace(target_position)
return target_position
def get_relative_ee_orientation(
    self,
    rotation: float | Quat | tuple[float, float, float, float, float, float],
    representation: str = "quat",
) -> Quat:
current_quat_xyzw = self.get_ee_orientation()
if representation == "z":
current_yaw = Rotation.from_quat(current_quat_xyzw).as_euler("xyz")[2]
current_quat_xyzw = Rotation.from_euler(
"xyz", [np.pi, 0, current_yaw]
).as_quat()
if isinstance(rotation, tuple):
if len(rotation) == 4 and representation == "quat":
relative_quat_xyzw = rotation
elif len(rotation) == 6 and representation == "6d":
vectors = (rotation[0:3], rotation[3:6])
relative_quat_xyzw = orientation_6d_to_quat(vectors[0], vectors[1])
else:
raise ValueError("Invalid rotation tuple length for representation.")
elif isinstance(rotation, float) and representation == "z":
rotation = self.scale_relative_rotation(rotation)
relative_quat_xyzw = Rotation.from_euler("xyz", [0, 0, rotation]).as_quat()
else:
raise TypeError("Invalid type for rotation or representation.")
    target_quat_xyzw = quat_mul(tuple(current_quat_xyzw), tuple(relative_quat_xyzw))
    # Normalize the composed target quaternion (not the relative one) before returning
    target_quat_xyzw = normalize_quaternion(target_quat_xyzw)
    return target_quat_xyzw
def normalize_quaternion(
target_quat_xyzw: tuple[float, float, float, float],
) -> tuple[float, float, float, float]:
quat_array = np.array(target_quat_xyzw)
normalized_quat = quat_array / np.linalg.norm(quat_array)
return tuple(normalized_quat)
def scale_relative_translation(self, translation: Point) -> Point:
return (
self.__scaling_factor_translation * translation[0],
self.__scaling_factor_translation * translation[1],
self.__scaling_factor_translation * translation[2],
)
def scale_relative_rotation(
self, rotation: float | Rpy | np.floating | np.ndarray
) -> float | Rpy:
scaling_factor = self.__scaling_factor_rotation
if isinstance(rotation, (int, float, np.floating)):
return scaling_factor * rotation
return tuple(scaling_factor * r for r in rotation)
def restrict_position_goal_to_workspace(self, position: Point) -> Point:
return (
min(
self.workspace_max_bound[0],
max(
self.workspace_min_bound[0],
position[0],
),
),
min(
self.workspace_max_bound[1],
max(
self.workspace_min_bound[1],
position[1],
),
),
min(
self.workspace_max_bound[2],
max(
self.workspace_min_bound[2],
position[2],
),
),
)
# def restrict_servo_translation_to_workspace(
# self, translation: tuple[float, float, float]
# ) -> tuple[float, float, float]:
# current_ee_position = self.get_ee_position()
#
# translation = tuple(
# 0.0
# if (
# current_ee_position[i] > self.workspace_max_bound[i]
# and translation[i] > 0.0
# )
# or (
# current_ee_position[i] < self.workspace_min_bound[i]
# and translation[i] < 0.0
# )
# else translation[i]
# for i in range(3)
# )
#
# return translation
def get_ee_pose(
node: Node,
world: World,
robot_ee_link_name: str,
robot_name: str,
robot_arm_base_link_name: str,
tf2_listener: Tf2Listener,
) -> Pose | None:
"""
Return the current pose of the end effector with respect to arm base link.
"""
try:
robot_model = world.to_gazebo().get_model(robot_name).to_gazebo()
ee_position, ee_quat_xyzw = get_model_pose(
world=world,
model=robot_model,
link=robot_ee_link_name,
xyzw=True,
)
return transform_change_reference_frame_pose(
world=world,
position=ee_position,
quat=ee_quat_xyzw,
target_model=robot_model,
target_link=robot_arm_base_link_name,
xyzw=True,
)
except Exception as e:
node.get_logger().warn(
f"Cannot get end effector pose from Gazebo ({e}), using tf2..."
)
transform = tf2_listener.lookup_transform_sync(
source_frame=robot_ee_link_name,
target_frame=robot_arm_base_link_name,
retry=False,
)
if transform is not None:
return (
(
transform.translation.x,
transform.translation.y,
transform.translation.z,
),
(
transform.rotation.x,
transform.rotation.y,
transform.rotation.z,
transform.rotation.w,
),
)
else:
node.get_logger().error(
"Cannot get pose of the end effector (default values are returned)"
)
return (
(0.0, 0.0, 0.0),
(0.0, 0.0, 0.0, 1.0),
)
def get_ee_position(
world: World,
robot_name: str,
robot_ee_link_name: str,
robot_arm_base_link_name: str,
node: Node,
tf2_listener: Tf2Listener,
) -> Point:
"""
Return the current position of the end effector with respect to arm base link.
"""
try:
robot_model = world.to_gazebo().get_model(robot_name).to_gazebo()
ee_position = get_model_position(
world=world,
model=robot_model,
link=robot_ee_link_name,
)
return transform_change_reference_frame_position(
world=world,
position=ee_position,
target_model=robot_model,
target_link=robot_arm_base_link_name,
)
except Exception as e:
node.get_logger().debug(
f"Cannot get end effector position from Gazebo ({e}), using tf2..."
)
transform = tf2_listener.lookup_transform_sync(
source_frame=robot_ee_link_name,
target_frame=robot_arm_base_link_name,
retry=False,
)
if transform is not None:
return (
transform.translation.x,
transform.translation.y,
transform.translation.z,
)
else:
node.get_logger().error(
"Cannot get position of the end effector (default values are returned)"
)
return (0.0, 0.0, 0.0)
def get_ee_orientation(
world: World,
robot_name: str,
robot_ee_link_name: str,
robot_arm_base_link_name: str,
node: Node,
tf2_listener: Tf2Listener,
) -> Quat:
"""
Return the current xyzw quaternion of the end effector with respect to arm base link.
"""
try:
robot_model = world.to_gazebo().get_model(robot_name).to_gazebo()
ee_quat_xyzw = get_model_orientation(
world=world,
model=robot_model,
link=robot_ee_link_name,
xyzw=True,
)
return transform_change_reference_frame_orientation(
world=world,
quat=ee_quat_xyzw,
target_model=robot_model,
target_link=robot_arm_base_link_name,
xyzw=True,
)
except Exception as e:
node.get_logger().warn(
f"Cannot get end effector orientation from Gazebo ({e}), using tf2..."
)
transform = tf2_listener.lookup_transform_sync(
source_frame=robot_ee_link_name,
target_frame=robot_arm_base_link_name,
retry=False,
)
if transform is not None:
return (
transform.rotation.x,
transform.rotation.y,
transform.rotation.z,
transform.rotation.w,
)
else:
node.get_logger().error(
"Cannot get orientation of the end effector (default values are returned)"
)
return (0.0, 0.0, 0.0, 1.0)
def get_object_position(
object_model: ModelWrapper | str,
node: Node,
world: World,
robot_name: str,
robot_arm_base_link_name: str,
) -> Point:
"""
Return the current position of an object with respect to arm base link.
Note: Only simulated objects are currently supported.
"""
try:
object_position = get_model_position(
world=world,
model=object_model,
)
return transform_change_reference_frame_position(
world=world,
position=object_position,
target_model=robot_name,
target_link=robot_arm_base_link_name,
)
except Exception as e:
node.get_logger().error(
f"Cannot get position of {object_model} object (default values are returned): {e}"
)
return (0.0, 0.0, 0.0)
def get_object_positions(
node: Node,
world: World,
object_names: list[str],
robot_name: str,
robot_arm_base_link_name: str,
) -> dict[str, tuple[float, float, float]]:
"""
Return the current position of all objects with respect to arm base link.
Note: Only simulated objects are currently supported.
"""
object_positions = {}
try:
robot_model = world.to_gazebo().get_model(robot_name).to_gazebo()
robot_arm_base_link = robot_model.get_link(link_name=robot_arm_base_link_name)
for object_name in object_names:
object_position = get_model_position(
world=world,
model=object_name,
)
object_positions[object_name] = transform_change_reference_frame_position(
world=world,
position=object_position,
target_model=robot_model,
target_link=robot_arm_base_link,
)
except Exception as e:
node.get_logger().error(
f"Cannot get positions of all objects (empty Dict is returned): {e}"
)
return object_positions
# def substitute_special_frame(self, frame_id: str) -> str:
# if "arm_base_link" == frame_id:
# return self.robot_arm_base_link_name
# elif "base_link" == frame_id:
# return self.robot_base_link_name
# elif "end_effector" == frame_id:
# return self.robot_ee_link_name
# elif "world" == frame_id:
# try:
# # In Gazebo, where multiple worlds are allowed
# return self.world.to_gazebo().name()
# except Exception as e:
# self.get_logger().warn(f"")
# # Otherwise (e.g. real world)
# return "rbs_gym_world"
# else:
# return frame_id
def move_to_initial_joint_configuration(self):
pass
# self.moveit2.move_to_configuration(self.initial_arm_joint_positions)
# if (
# self.robot_model_class.CLOSED_GRIPPER_JOINT_POSITIONS
# == self.initial_gripper_joint_positions
# ):
# self.gripper.reset_close()
# else:
# self.gripper.reset_open()
def check_terrain_collision(
world: World,
robot_name: str,
terrain_name: str,
robot_base_link_name: str,
robot_arm_link_names: list[str],
robot_gripper_link_names: list[str],
) -> bool:
"""
Returns true if robot links are in collision with the ground.
"""
robot_name_len = len(robot_name)
terrain_model = world.get_model(terrain_name)
for contact in terrain_model.contacts():
body_b = contact.body_b
if body_b.startswith(robot_name) and len(body_b) > robot_name_len:
link = body_b[robot_name_len + 2 :]
if link != robot_base_link_name and (
link in robot_arm_link_names or link in robot_gripper_link_names
):
return True
return False
def check_all_objects_outside_workspace(
workspace: Pose,
object_positions: dict[str, Point],
) -> bool:
"""
Returns True if all objects are outside the workspace.
"""
return all(
[
check_object_outside_workspace(workspace, object_position)
for object_position in object_positions.values()
]
)
def check_object_outside_workspace(workspace: Pose, object_position: Point) -> bool:
"""
Returns True if the object is outside the workspace.
"""
workspace_min_bound, workspace_max_bound = workspace
return any(
coord < min_bound or coord > max_bound
for coord, min_bound, max_bound in zip(
object_position, workspace_min_bound, workspace_max_bound
)
)
# def add_parameter_overrides(self, parameter_overrides: Dict[str, any]):
# self.add_task_parameter_overrides(parameter_overrides)
# self.add_randomizer_parameter_overrides(parameter_overrides)
#
#
# def add_task_parameter_overrides(self, parameter_overrides: Dict[str, any]):
# self.__task_parameter_overrides.update(parameter_overrides)
#
#
# def add_randomizer_parameter_overrides(self, parameter_overrides: Dict[str, any]):
# self._randomizer_parameter_overrides.update(parameter_overrides)
#
#
# def __consume_parameter_overrides(self):
# for key, value in self.__task_parameter_overrides.items():
# if hasattr(self, key):
# setattr(self, key, value)
# elif hasattr(self, f"_{key}"):
# setattr(self, f"_{key}", value)
# elif hasattr(self, f"__{key}"):
# setattr(self, f"__{key}", value)
# else:
# self.get_logger().error(f"Override '{key}' is not supperted by the task.")
#
# self.__task_parameter_overrides.clear()
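The workspace helpers above operate on plain tuples, so they can be exercised without a simulation; the bounds and object names below are hypothetical:
workspace = ((-0.5, -0.5, 0.0), (0.5, 0.5, 0.5))  # (min_bound, max_bound)
object_positions = {
    "cube": (0.1, 0.2, 0.05),    # inside the bounds
    "sphere": (0.9, 0.0, 0.05),  # outside along x
}
check_object_outside_workspace(workspace, object_positions["sphere"])  # True
check_all_objects_outside_workspace(workspace, object_positions)       # False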

View file

@ -1,25 +0,0 @@
from typing import Union
from gymnasium import logger as gym_logger
from gym_gz.utils import logger as gym_ign_logger
def set_log_level(log_level: Union[int, str]):
"""
Set log level for (Gym) Ignition.
"""
if not isinstance(log_level, int):
log_level = str(log_level).upper()
if "WARNING" == log_level:
log_level = "WARN"
        elif log_level not in ["DEBUG", "INFO", "WARN", "ERROR", "DISABLED"]:
log_level = "DISABLED"
log_level = getattr(gym_logger, log_level)
gym_ign_logger.set_level(
level=log_level,
scenario_level=log_level,
)
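Usage is a single call; both the gymnasium logger's integer levels and their string names are accepted (sketch):
set_log_level("warning")  # normalised to WARN
set_log_level("foo")      # unrecognised names fall back to DISABLED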

View file

@ -1,44 +0,0 @@
import numpy as np
def quat_mul(
quat_0: tuple[float, float, float, float],
quat_1: tuple[float, float, float, float],
xyzw: bool = True,
) -> tuple[float, float, float, float]:
"""
Multiply two quaternions
"""
if xyzw:
x0, y0, z0, w0 = quat_0
x1, y1, z1, w1 = quat_1
return (
x1 * w0 + y1 * z0 - z1 * y0 + w1 * x0,
-x1 * z0 + y1 * w0 + z1 * x0 + w1 * y0,
x1 * y0 - y1 * x0 + z1 * w0 + w1 * z0,
-x1 * x0 - y1 * y0 - z1 * z0 + w1 * w0,
)
else:
w0, x0, y0, z0 = quat_0
w1, x1, y1, z1 = quat_1
return (
-x1 * x0 - y1 * y0 - z1 * z0 + w1 * w0,
x1 * w0 + y1 * z0 - z1 * y0 + w1 * x0,
-x1 * z0 + y1 * w0 + z1 * x0 + w1 * y0,
x1 * y0 - y1 * x0 + z1 * w0 + w1 * z0,
)
def distance_to_nearest_point(
origin: tuple[float, float, float], points: list[tuple[float, float, float]]
) -> float:
return np.linalg.norm(np.array(points) - np.array(origin), axis=1).min()
def get_nearest_point(
origin: tuple[float, float, float], points: list[tuple[float, float, float]]
) -> tuple[float, float, float]:
target_distances = np.linalg.norm(np.array(points) - np.array(origin), axis=1)
return points[target_distances.argmin()]
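A quick sanity check of `quat_mul` against scipy (the composition order matches `R(quat_1) * R(quat_0)`, i.e. `quat_0` is applied first); the values are illustrative:
import numpy as np
from scipy.spatial.transform import Rotation
q0 = Rotation.from_euler("z", 90, degrees=True).as_quat()  # xyzw
q1 = Rotation.from_euler("x", 90, degrees=True).as_quat()  # xyzw
composed = quat_mul(tuple(q0), tuple(q1), xyzw=True)
reference = (Rotation.from_quat(q1) * Rotation.from_quat(q0)).as_quat()
assert np.allclose(composed, reference) or np.allclose(composed, -reference)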

View file

@ -1,74 +0,0 @@
import sys
from typing import Tuple
import rclpy
from geometry_msgs.msg import TransformStamped
from rclpy.node import Node
from rclpy.parameter import Parameter
from tf2_ros import StaticTransformBroadcaster
class Tf2Broadcaster:
def __init__(
self,
node: Node,
):
self._node = node
self.__tf2_broadcaster = StaticTransformBroadcaster(node=self._node)
self._transform_stamped = TransformStamped()
def broadcast_tf(
self,
parent_frame_id: str,
child_frame_id: str,
translation: Tuple[float, float, float],
rotation: Tuple[float, float, float, float],
xyzw: bool = True,
):
"""
Broadcast transformation of the camera
"""
self._transform_stamped.header.frame_id = parent_frame_id
self._transform_stamped.child_frame_id = child_frame_id
self._transform_stamped.header.stamp = self._node.get_clock().now().to_msg()
self._transform_stamped.transform.translation.x = float(translation[0])
self._transform_stamped.transform.translation.y = float(translation[1])
self._transform_stamped.transform.translation.z = float(translation[2])
if xyzw:
self._transform_stamped.transform.rotation.x = float(rotation[0])
self._transform_stamped.transform.rotation.y = float(rotation[1])
self._transform_stamped.transform.rotation.z = float(rotation[2])
self._transform_stamped.transform.rotation.w = float(rotation[3])
else:
self._transform_stamped.transform.rotation.w = float(rotation[0])
self._transform_stamped.transform.rotation.x = float(rotation[1])
self._transform_stamped.transform.rotation.y = float(rotation[2])
self._transform_stamped.transform.rotation.z = float(rotation[3])
self.__tf2_broadcaster.sendTransform(self._transform_stamped)
class Tf2BroadcasterStandalone(Node, Tf2Broadcaster):
def __init__(
self,
node_name: str = "env_manager_tf_broadcaster",
use_sim_time: bool = True,
):
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
Node.__init__(self, node_name)
self.set_parameters(
[Parameter("use_sim_time", type_=Parameter.Type.BOOL, value=use_sim_time)]
)
Tf2Broadcaster.__init__(self, node=self)
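A standalone usage sketch; the frame names and values are hypothetical:
broadcaster = Tf2BroadcasterStandalone()
broadcaster.broadcast_tf(
    parent_frame_id="world",
    child_frame_id="camera_link",
    translation=(1.0, 0.0, 0.5),
    rotation=(0.0, 0.0, 0.0, 1.0),  # identity, xyzw order
)
rclpy.spin_once(broadcaster, timeout_sec=0.1)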

View file

@ -1,74 +0,0 @@
import sys
from typing import Tuple
import rclpy
from geometry_msgs.msg import TransformStamped
from rclpy.node import Node
from rclpy.parameter import Parameter
from tf2_ros import TransformBroadcaster
class Tf2DynamicBroadcaster:
def __init__(
self,
node: Node,
):
self._node = node
self.__tf2_broadcaster = TransformBroadcaster(node=self._node)
self._transform_stamped = TransformStamped()
def broadcast_tf(
self,
parent_frame_id: str,
child_frame_id: str,
translation: Tuple[float, float, float],
rotation: Tuple[float, float, float, float],
xyzw: bool = True,
):
"""
Broadcast transformation of the camera
"""
self._transform_stamped.header.frame_id = parent_frame_id
self._transform_stamped.child_frame_id = child_frame_id
self._transform_stamped.header.stamp = self._node.get_clock().now().to_msg()
self._transform_stamped.transform.translation.x = float(translation[0])
self._transform_stamped.transform.translation.y = float(translation[1])
self._transform_stamped.transform.translation.z = float(translation[2])
if xyzw:
self._transform_stamped.transform.rotation.x = float(rotation[0])
self._transform_stamped.transform.rotation.y = float(rotation[1])
self._transform_stamped.transform.rotation.z = float(rotation[2])
self._transform_stamped.transform.rotation.w = float(rotation[3])
else:
self._transform_stamped.transform.rotation.w = float(rotation[0])
self._transform_stamped.transform.rotation.x = float(rotation[1])
self._transform_stamped.transform.rotation.y = float(rotation[2])
self._transform_stamped.transform.rotation.z = float(rotation[3])
self.__tf2_broadcaster.sendTransform(self._transform_stamped)
class Tf2DynamicBroadcasterStandalone(Node, Tf2DynamicBroadcaster):
def __init__(
self,
node_name: str = "env_manager_tf_broadcaster",
use_sim_time: bool = True,
):
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
Node.__init__(self, node_name)
self.set_parameters(
[Parameter("use_sim_time", type_=Parameter.Type.BOOL, value=use_sim_time)]
)
Tf2DynamicBroadcaster.__init__(self, node=self)
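Unlike the static broadcaster above, a dynamic transform has to be re-published periodically; a sketch with hypothetical frame names:
broadcaster = Tf2DynamicBroadcasterStandalone()
def publish_tf():
    broadcaster.broadcast_tf(
        parent_frame_id="world",
        child_frame_id="moving_object",
        translation=(0.5, 0.0, 0.2),
        rotation=(0.0, 0.0, 0.0, 1.0),
    )
broadcaster.create_timer(0.1, publish_tf)  # 10 Hz
rclpy.spin(broadcaster)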

View file

@ -1,74 +0,0 @@
import sys
from typing import Optional
import rclpy
from geometry_msgs.msg import Transform
from rclpy.node import Node
from rclpy.parameter import Parameter
from tf2_ros import Buffer, TransformListener
class Tf2Listener:
def __init__(
self,
node: Node,
):
self._node = node
# Create tf2 buffer and listener for transform lookup
self.__tf2_buffer = Buffer()
TransformListener(buffer=self.__tf2_buffer, node=node)
def lookup_transform_sync(
self, target_frame: str, source_frame: str, retry: bool = True
) -> Optional[Transform]:
try:
return self.__tf2_buffer.lookup_transform(
target_frame=target_frame,
source_frame=source_frame,
time=rclpy.time.Time(),
).transform
        except Exception:
if retry:
while rclpy.ok():
if self.__tf2_buffer.can_transform(
target_frame=target_frame,
source_frame=source_frame,
time=rclpy.time.Time(),
timeout=rclpy.time.Duration(seconds=1, nanoseconds=0),
):
return self.__tf2_buffer.lookup_transform(
target_frame=target_frame,
source_frame=source_frame,
time=rclpy.time.Time(),
).transform
self._node.get_logger().warn(
f'Lookup of transform from "{source_frame}"'
f' to "{target_frame}" failed, retrying...'
)
else:
return None
class Tf2ListenerStandalone(Node, Tf2Listener):
def __init__(
self,
node_name: str = "rbs_gym_tf_listener",
use_sim_time: bool = True,
):
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
Node.__init__(self, node_name)
self.set_parameters(
[Parameter("use_sim_time", type_=Parameter.Type.BOOL, value=use_sim_time)]
)
Tf2Listener.__init__(self, node=self)
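A standalone lookup sketch; the listener node has to spin so the tf2 buffer can fill, and the frame names are hypothetical:
listener = Tf2ListenerStandalone()
for _ in range(10):
    rclpy.spin_once(listener, timeout_sec=0.1)  # let /tf messages arrive
transform = listener.lookup_transform_sync(
    target_frame="base_link", source_frame="tool0", retry=False
)
if transform is not None:
    print(transform.translation, transform.rotation)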

View file

@ -1,5 +0,0 @@
Point = tuple[float, float, float]
Quat = tuple[float, float, float, float]
Rpy = tuple[float, float, float]
Pose = tuple[Point, Quat]
PoseRpy = tuple[Point, Rpy]
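The aliases nest as plain tuples, e.g. (illustrative values):
home_pose: Pose = ((0.3, 0.0, 0.4), (0.0, 0.0, 0.0, 1.0))
position: Point = home_pose[0]
quat_xyzw: Quat = home_pose[1]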

View file

@ -1,18 +0,0 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>env_manager</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="ur.narmak@gmail.com">narmak</maintainer>
<license>Apache-2.0</license>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

View file

@ -1,4 +0,0 @@
[develop]
script_dir=$base/lib/env_manager
[install]
install_scripts=$base/lib/env_manager

View file

@ -1,25 +0,0 @@
from setuptools import find_packages, setup
package_name = 'env_manager'
setup(
name=package_name,
version='0.0.0',
packages=find_packages(exclude=['test']),
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
],
install_requires=['setuptools', 'trimesh'],
zip_safe=True,
maintainer='narmak',
maintainer_email='ur.narmak@gmail.com',
description='TODO: Package description',
license='Apache-2.0',
tests_require=['pytest'],
entry_points={
'console_scripts': [
],
},
)

View file

@ -1,25 +0,0 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_copyright.main import main
import pytest
# Remove the `skip` decorator once the source file(s) have a copyright header
@pytest.mark.skip(reason='No copyright header has been placed in the generated source file.')
@pytest.mark.copyright
@pytest.mark.linter
def test_copyright():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found errors'

View file

@ -1,25 +0,0 @@
# Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_flake8.main import main_with_errors
import pytest
@pytest.mark.flake8
@pytest.mark.linter
def test_flake8():
rc, errors = main_with_errors(argv=[])
assert rc == 0, \
'Found %d code style errors / warnings:\n' % len(errors) + \
'\n'.join(errors)

View file

@ -1,23 +0,0 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_pep257.main import main
import pytest
@pytest.mark.linter
@pytest.mark.pep257
def test_pep257():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found code style errors / warnings'

View file

@ -1,38 +0,0 @@
cmake_minimum_required(VERSION 3.8)
project(env_manager_interfaces)
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
find_package(lifecycle_msgs REQUIRED)
find_package(rosidl_default_generators REQUIRED)
# uncomment the following section in order to fill in
# further dependencies manually.
# find_package(<dependency> REQUIRED)
rosidl_generate_interfaces(${PROJECT_NAME}
srv/StartEnv.srv
srv/ConfigureEnv.srv
srv/LoadEnv.srv
srv/UnloadEnv.srv
srv/ResetEnv.srv
msg/EnvState.msg
DEPENDENCIES lifecycle_msgs
)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# comment the line when a copyright and license is added to all source files
set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# comment the line when this package is in a git repo and when
# a copyright and license is added to all source files
set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
ament_package()

View file

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -1,4 +0,0 @@
string name
string type
string plugin_name
lifecycle_msgs/State state
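Assuming this is EnvState.msg from env_manager_interfaces (the only message listed in the CMakeLists above), it can be filled from Python as in the sketch below; the field values are hypothetical:
from env_manager_interfaces.msg import EnvState
from lifecycle_msgs.msg import State
env_state = EnvState()
env_state.name = "reach_env"            # hypothetical values
env_state.type = "gazebo"
env_state.plugin_name = "reach_plugin"
env_state.state = State(id=State.PRIMARY_STATE_ACTIVE, label="active")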

View file

@ -1,27 +0,0 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>env_manager_interfaces</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="ur.narmak@gmail.com">solid-sinusoid</maintainer>
<license>Apache-2.0</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<buildtool_depend>rosidl_default_generators</buildtool_depend>
<build_depend>builtin_interfaces</build_depend>
<build_depend>lifecycle_msgs</build_depend>
<exec_depend>builtin_interfaces</exec_depend>
<exec_depend>lifecycle_msgs</exec_depend>
<exec_depend>rosidl_default_runtime</exec_depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<member_of_group>rosidl_interface_packages</member_of_group>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>

View file

@ -1,3 +0,0 @@
string name
---
bool ok

View file

@ -1,4 +0,0 @@
string name
string type
---
bool ok

View file

@ -1,3 +0,0 @@
---
bool ok

View file

@ -1,4 +0,0 @@
string name
string type
---
bool ok

View file

@ -1,3 +0,0 @@
string name
---
bool ok
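A hedged client sketch for these services, assuming the "string name -> bool ok" layout above belongs to StartEnv.srv (the file-to-service mapping is not visible in this view) and using a hypothetical service name:
import rclpy
from rclpy.node import Node
from env_manager_interfaces.srv import StartEnv
rclpy.init()
node = Node("env_manager_client")
client = node.create_client(StartEnv, "/env_manager/start_env")  # service name is hypothetical
client.wait_for_service(timeout_sec=5.0)
future = client.call_async(StartEnv.Request(name="reach_env"))
rclpy.spin_until_future_complete(node, future)
print("ok:", future.result().ok)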

View file

@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -1,42 +0,0 @@
# Reach
Reach-Gazebo-v0:
policy: "MlpPolicy"
policy_kwargs:
n_critics: 2
net_arch: [128, 64]
n_timesteps: 200000
buffer_size: 25000
learning_starts: 5000
batch_size: 512
learning_rate: lin_0.0002
gamma: 0.95
tau: 0.001
ent_coef: "auto_0.1"
target_entropy: "auto"
train_freq: [1, "episode"]
gradient_steps: 100
noise_type: "normal"
noise_std: 0.025
use_sde: False
optimize_memory_usage: False
Reach-ColorImage-Gazebo-v0:
policy: "CnnPolicy"
policy_kwargs:
n_critics: 2
net_arch: [128, 128]
n_timesteps: 50000
buffer_size: 25000
learning_starts: 5000
batch_size: 32
learning_rate: lin_0.0002
gamma: 0.95
tau: 0.0005
ent_coef: "auto_0.1"
target_entropy: "auto"
train_freq: [1, "episode"]
gradient_steps: 100
noise_type: "normal"
noise_std: 0.025
use_sde: False
optimize_memory_usage: False

View file

@ -1,39 +0,0 @@
Reach-Gazebo-v0:
policy: "MlpPolicy"
policy_kwargs:
n_critics: 2
net_arch: [128, 64]
n_timesteps: 200000
buffer_size: 25000
learning_starts: 5000
batch_size: 512
learning_rate: lin_0.0002
gamma: 0.95
tau: 0.001
train_freq: [1, "episode"]
gradient_steps: 100
target_policy_noise: 0.1
target_noise_clip: 0.2
noise_type: "normal"
noise_std: 0.025
optimize_memory_usage: False
Reach-ColorImage-Gazebo-v0:
policy: "CnnPolicy"
policy_kwargs:
n_critics: 2
net_arch: [128, 128]
n_timesteps: 50000
buffer_size: 25000
learning_starts: 5000
batch_size: 32
learning_rate: lin_0.0002
gamma: 0.95
tau: 0.0005
train_freq: [1, "episode"]
gradient_steps: 100
target_policy_noise: 0.1
target_noise_clip: 0.2
noise_type: "normal"
noise_std: 0.025
optimize_memory_usage: True

View file

@ -1,46 +0,0 @@
# Reach
Reach-Gazebo-v0:
  policy: "MlpPolicy"
  policy_kwargs:
    n_quantiles: 25
    n_critics: 2
    net_arch: [128, 64]
  n_timesteps: 200000
  buffer_size: 25000
  learning_starts: 5000
  batch_size: 512
  learning_rate: lin_0.0002
  gamma: 0.95
  tau: 0.001
  ent_coef: "auto_0.1"
  target_entropy: "auto"
  top_quantiles_to_drop_per_net: 2
  train_freq: [1, "episode"]
  gradient_steps: 100
  noise_type: "normal"
  noise_std: 0.025
  use_sde: False
  optimize_memory_usage: False
Reach-ColorImage-Gazebo-v0:
  policy: "CnnPolicy"
  policy_kwargs:
    n_quantiles: 25
    n_critics: 2
    net_arch: [128, 128]
  n_timesteps: 50000
  buffer_size: 25000
  learning_starts: 5000
  batch_size: 32
  learning_rate: lin_0.0002
  gamma: 0.95
  tau: 0.0005
  ent_coef: "auto_0.1"
  target_entropy: "auto"
  top_quantiles_to_drop_per_net: 2
  train_freq: [1, "episode"]
  gradient_steps: 100
  noise_type: "normal"
  noise_std: 0.025
  use_sde: False
  optimize_memory_usage: True
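If the quantile-specific keys in this block map onto sb3_contrib's TQC as the names suggest (an assumption; file names are not shown in this diff), building the model from such a block might look like the following sketch, with a stand-in environment instead of the Gazebo one.

import gymnasium as gym
from sb3_contrib import TQC

env = gym.make("Pendulum-v1")  # stand-in; the real config targets Reach-Gazebo-v0
model = TQC(
    "MlpPolicy",
    env,
    policy_kwargs={"n_quantiles": 25, "n_critics": 2, "net_arch": [128, 64]},
    top_quantiles_to_drop_per_net=2,
    buffer_size=25_000,
    learning_starts=5_000,
    batch_size=512,
    gamma=0.95,
    tau=0.001,
    ent_coef="auto_0.1",
    target_entropy="auto",
    train_freq=(1, "episode"),
    gradient_steps=100,
)
model.learn(total_timesteps=1_000)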


@@ -1,426 +0,0 @@
from launch import LaunchContext, LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
IncludeLaunchDescription,
OpaqueFunction,
SetEnvironmentVariable,
TimerAction
)
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare
from launch_ros.actions import Node
import os
from os import cpu_count
from ament_index_python.packages import get_package_share_directory
def launch_setup(context, *args, **kwargs):
# Initialize Arguments
robot_type = LaunchConfiguration("robot_type")
# General arguments
with_gripper_condition = LaunchConfiguration("with_gripper")
controllers_file = LaunchConfiguration("controllers_file")
cartesian_controllers = LaunchConfiguration("cartesian_controllers")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
robot_name = LaunchConfiguration("robot_name")
start_joint_controller = LaunchConfiguration("start_joint_controller")
initial_joint_controller = LaunchConfiguration("initial_joint_controller")
launch_simulation = LaunchConfiguration("launch_sim")
launch_moveit = LaunchConfiguration("launch_moveit")
launch_task_planner = LaunchConfiguration("launch_task_planner")
launch_perception = LaunchConfiguration("launch_perception")
moveit_config_package = LaunchConfiguration("moveit_config_package")
moveit_config_file = LaunchConfiguration("moveit_config_file")
use_sim_time = LaunchConfiguration("use_sim_time")
sim_gazebo = LaunchConfiguration("sim_gazebo")
hardware = LaunchConfiguration("hardware")
env_manager = LaunchConfiguration("env_manager")
launch_controllers = LaunchConfiguration("launch_controllers")
gazebo_gui = LaunchConfiguration("gazebo_gui")
gripper_name = LaunchConfiguration("gripper_name")
# training arguments
env = LaunchConfiguration("env")
algo = LaunchConfiguration("algo")
num_threads = LaunchConfiguration("num_threads")
seed = LaunchConfiguration("seed")
log_folder = LaunchConfiguration("log_folder")
verbose = LaunchConfiguration("verbose")
# use_sim_time = LaunchConfiguration("use_sim_time")
log_level = LaunchConfiguration("log_level")
env_kwargs = LaunchConfiguration("env_kwargs")
n_episodes = LaunchConfiguration("n_episodes")
exp_id = LaunchConfiguration("exp_id")
load_best = LaunchConfiguration("load_best")
load_checkpoint = LaunchConfiguration("load_checkpoint")
stochastic = LaunchConfiguration("stochastic")
reward_log = LaunchConfiguration("reward_log")
norm_reward = LaunchConfiguration("norm_reward")
no_render = LaunchConfiguration("no_render")
sim_gazebo = LaunchConfiguration("sim_gazebo")
launch_simulation = LaunchConfiguration("launch_sim")
initial_joint_controllers_file_path = os.path.join(
get_package_share_directory('rbs_arm'), 'config', 'controllers.yaml'
)
single_robot_setup = IncludeLaunchDescription(
PythonLaunchDescriptionSource([
PathJoinSubstitution([
FindPackageShare('rbs_bringup'),
"launch",
"rbs_robot.launch.py"
])
]),
launch_arguments={
"env_manager": env_manager,
"with_gripper": with_gripper_condition,
"gripper_name": gripper_name,
# "controllers_file": controllers_file,
"robot_type": robot_type,
"controllers_file": initial_joint_controllers_file_path,
"cartesian_controllers": cartesian_controllers,
"description_package": description_package,
"description_file": description_file,
"robot_name": robot_name,
"start_joint_controller": start_joint_controller,
"initial_joint_controller": initial_joint_controller,
"launch_simulation": launch_simulation,
"launch_moveit": launch_moveit,
"launch_task_planner": launch_task_planner,
"launch_perception": launch_perception,
"moveit_config_package": moveit_config_package,
"moveit_config_file": moveit_config_file,
"use_sim_time": use_sim_time,
"sim_gazebo": sim_gazebo,
"hardware": hardware,
"launch_controllers": launch_controllers,
# "gazebo_gui": gazebo_gui
}.items()
)
args = [
"--env",
env,
"--env-kwargs",
env_kwargs,
"--algo",
algo,
"--seed",
seed,
"--num-threads",
num_threads,
"--n-episodes",
n_episodes,
"--log-folder",
log_folder,
"--exp-id",
exp_id,
"--load-best",
load_best,
"--load-checkpoint",
load_checkpoint,
"--stochastic",
stochastic,
"--reward-log",
reward_log,
"--norm-reward",
norm_reward,
"--no-render",
no_render,
"--verbose",
verbose,
"--ros-args",
"--log-level",
log_level,
]
rl_task = Node(
package="rbs_gym",
executable="evaluate",
output="log",
arguments=args,
parameters=[{"use_sim_time": use_sim_time}],
)
delay_robot_control_stack = TimerAction(
period=10.0,
actions=[single_robot_setup]
)
nodes_to_start = [
# env,
rl_task,
delay_robot_control_stack
]
return nodes_to_start
def generate_launch_description():
declared_arguments = []
declared_arguments.append(
DeclareLaunchArgument(
"robot_type",
description="Type of robot by name",
choices=["rbs_arm","ur3", "ur3e", "ur5", "ur5e", "ur10", "ur10e", "ur16e"],
default_value="rbs_arm",
)
)
# General arguments
declared_arguments.append(
DeclareLaunchArgument(
"controllers_file",
default_value="controllers_gazebosim.yaml",
description="YAML file with the controllers configuration.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="rbs_arm",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="rbs_arm_modular.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_name",
default_value="arm0",
description="Name for robot, used to apply namespace for specific robot in multirobot setup",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"start_joint_controller",
default_value="false",
description="Enable headless mode for robot control",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"initial_joint_controller",
default_value="joint_trajectory_controller",
description="Robot controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_package",
default_value="rbs_arm",
description="MoveIt config package with robot SRDF/XACRO files. Usually the argument \
is not set, it enables use of a custom moveit config.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_file",
default_value="rbs_arm.srdf.xacro",
description="MoveIt SRDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="true",
description="Make MoveIt to use simulation time.\
This is needed for the trajectory planing in simulation.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"gripper_name",
default_value="rbs_gripper",
choices=["rbs_gripper", ""],
description="choose gripper by name (leave empty if hasn't)",
)
)
declared_arguments.append(
DeclareLaunchArgument("with_gripper",
default_value="true",
description="With gripper or not?")
)
declared_arguments.append(
DeclareLaunchArgument("sim_gazebo",
default_value="true",
description="Gazebo Simulation")
)
declared_arguments.append(
DeclareLaunchArgument("env_manager",
default_value="false",
description="Launch env_manager?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_sim",
default_value="true",
description="Launch simulator (Gazebo)?\
Most general arg")
)
declared_arguments.append(
DeclareLaunchArgument("launch_moveit",
default_value="false",
description="Launch moveit?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_perception",
default_value="false",
description="Launch perception?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_task_planner",
default_value="false",
description="Launch task_planner?")
)
declared_arguments.append(
DeclareLaunchArgument("cartesian_controllers",
default_value="true",
description="Load cartesian\
controllers?")
)
declared_arguments.append(
DeclareLaunchArgument("hardware",
choices=["gazebo", "mock"],
default_value="gazebo",
description="Choose your harware_interface")
)
declared_arguments.append(
DeclareLaunchArgument("launch_controllers",
default_value="true",
description="Launch controllers?")
)
declared_arguments.append(
DeclareLaunchArgument("gazebo_gui",
default_value="true",
description="Launch gazebo with gui?")
)
# training arguments
declared_arguments.append(
DeclareLaunchArgument(
"env",
default_value="Reach-Gazebo-v0",
description="Environment ID",
))
declared_arguments.append(
DeclareLaunchArgument(
"env_kwargs",
default_value="",
description="Optional keyword argument to pass to the env constructor.",
))
declared_arguments.append(
DeclareLaunchArgument(
"vec_env",
default_value="dummy",
description="Type of VecEnv to use (dummy or subproc).",
))
# Algorithm and training
declared_arguments.append(
DeclareLaunchArgument(
"algo",
default_value="sac",
description="RL algorithm to use during the training.",
))
declared_arguments.append(
DeclareLaunchArgument(
"num_threads",
default_value="-1",
description="Number of threads for PyTorch (-1 to use default).",
))
# Random seed
declared_arguments.append(
DeclareLaunchArgument(
"seed",
default_value="84",
description="Random generator seed.",
))
# Logging
declared_arguments.append(
DeclareLaunchArgument(
"log_folder",
default_value="logs",
description="Path to the log directory.",
))
# Verbosity
declared_arguments.append(
DeclareLaunchArgument(
"verbose",
default_value="1",
description="Verbose mode (0: no output, 1: INFO).",
))
# HER specifics
declared_arguments.append(
DeclareLaunchArgument(
"log_level",
default_value="error",
description="The level of logging that is applied to all ROS 2 nodes launched by this script.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_episodes",
default_value="200",
description="Number of evaluation episodes.",
))
declared_arguments.append(
DeclareLaunchArgument(
"exp_id",
default_value="0",
description="Experiment ID (default: 0: latest, -1: no exp folder).",
))
declared_arguments.append(
DeclareLaunchArgument(
"load_best",
default_value="false",
description="Load best model instead of last model if available.",
))
declared_arguments.append(
DeclareLaunchArgument(
"load_checkpoint",
default_value="0",
description="Load checkpoint instead of last model if available, you must pass the number of timesteps corresponding to it.",
))
declared_arguments.append(
DeclareLaunchArgument(
"stochastic",
default_value="false",
description="Use stochastic actions instead of deterministic.",
))
declared_arguments.append(
DeclareLaunchArgument(
"reward_log",
default_value="reward_logs",
description="Where to log reward.",
))
declared_arguments.append(
DeclareLaunchArgument(
"norm_reward",
default_value="false",
description="Normalize reward if applicable (trained with VecNormalize)",
))
declared_arguments.append(
DeclareLaunchArgument(
"no_render",
default_value="true",
description="Do not render the environment (useful for tests).",
))
env_variables = [
SetEnvironmentVariable(name="OMP_DYNAMIC", value="TRUE"),
SetEnvironmentVariable(name="OMP_NUM_THREADS", value=str(cpu_count() // 2))
]
return LaunchDescription(declared_arguments + [OpaqueFunction(function=launch_setup)] + env_variables)
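All of these launch files share the same structure: arguments are declared in generate_launch_description(), and the nodes are assembled inside an OpaqueFunction so that LaunchConfiguration values can be resolved against the launch context at run time. A stripped-down sketch of that pattern (package and executable names taken from the file above, everything else reduced to the minimum):

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, OpaqueFunction
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

def launch_setup(context, *args, **kwargs):
    # Substitutions stay symbolic until this function runs with the context.
    env_id = LaunchConfiguration("env")
    return [
        Node(
            package="rbs_gym",
            executable="evaluate",
            arguments=["--env", env_id],
        )
    ]

def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument("env", default_value="Reach-Gazebo-v0"),
        OpaqueFunction(function=launch_setup),
    ])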


@@ -1,519 +0,0 @@
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
IncludeLaunchDescription,
OpaqueFunction,
TimerAction
)
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare
from launch_ros.actions import Node
import os
from os import cpu_count
from ament_index_python.packages import get_package_share_directory
def launch_setup(context, *args, **kwargs):
# Initialize Arguments
robot_type = LaunchConfiguration("robot_type")
# General arguments
with_gripper_condition = LaunchConfiguration("with_gripper")
controllers_file = LaunchConfiguration("controllers_file")
cartesian_controllers = LaunchConfiguration("cartesian_controllers")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
robot_name = LaunchConfiguration("robot_name")
start_joint_controller = LaunchConfiguration("start_joint_controller")
initial_joint_controller = LaunchConfiguration("initial_joint_controller")
launch_simulation = LaunchConfiguration("launch_sim")
launch_moveit = LaunchConfiguration("launch_moveit")
launch_task_planner = LaunchConfiguration("launch_task_planner")
launch_perception = LaunchConfiguration("launch_perception")
moveit_config_package = LaunchConfiguration("moveit_config_package")
moveit_config_file = LaunchConfiguration("moveit_config_file")
use_sim_time = LaunchConfiguration("use_sim_time")
sim_gazebo = LaunchConfiguration("sim_gazebo")
hardware = LaunchConfiguration("hardware")
env_manager = LaunchConfiguration("env_manager")
launch_controllers = LaunchConfiguration("launch_controllers")
gripper_name = LaunchConfiguration("gripper_name")
# training arguments
env = LaunchConfiguration("env")
env_kwargs = LaunchConfiguration("env_kwargs")
algo = LaunchConfiguration("algo")
hyperparams = LaunchConfiguration("hyperparams")
n_timesteps = LaunchConfiguration("n_timesteps")
num_threads = LaunchConfiguration("num_threads")
seed = LaunchConfiguration("seed")
preload_replay_buffer = LaunchConfiguration("preload_replay_buffer")
log_folder = LaunchConfiguration("log_folder")
tensorboard_log = LaunchConfiguration("tensorboard_log")
log_interval = LaunchConfiguration("log_interval")
uuid = LaunchConfiguration("uuid")
eval_episodes = LaunchConfiguration("eval_episodes")
verbose = LaunchConfiguration("verbose")
truncate_last_trajectory = LaunchConfiguration("truncate_last_trajectory")
use_sim_time = LaunchConfiguration("use_sim_time")
log_level = LaunchConfiguration("log_level")
sampler = LaunchConfiguration("sampler")
pruner = LaunchConfiguration("pruner")
n_trials = LaunchConfiguration("n_trials")
n_startup_trials = LaunchConfiguration("n_startup_trials")
n_evaluations = LaunchConfiguration("n_evaluations")
n_jobs = LaunchConfiguration("n_jobs")
storage = LaunchConfiguration("storage")
study_name = LaunchConfiguration("study_name")
sim_gazebo = LaunchConfiguration("sim_gazebo")
launch_simulation = LaunchConfiguration("launch_sim")
initial_joint_controllers_file_path = os.path.join(
get_package_share_directory('rbs_arm'), 'config', 'controllers.yaml'
)
single_robot_setup = IncludeLaunchDescription(
PythonLaunchDescriptionSource([
PathJoinSubstitution([
FindPackageShare('rbs_bringup'),
"launch",
"rbs_robot.launch.py"
])
]),
launch_arguments={
"env_manager": env_manager,
"with_gripper": with_gripper_condition,
"gripper_name": gripper_name,
"controllers_file": controllers_file,
"robot_type": robot_type,
"controllers_file": initial_joint_controllers_file_path,
"cartesian_controllers": cartesian_controllers,
"description_package": description_package,
"description_file": description_file,
"robot_name": robot_name,
"start_joint_controller": start_joint_controller,
"initial_joint_controller": initial_joint_controller,
"launch_simulation": launch_simulation,
"launch_moveit": launch_moveit,
"launch_task_planner": launch_task_planner,
"launch_perception": launch_perception,
"moveit_config_package": moveit_config_package,
"moveit_config_file": moveit_config_file,
"use_sim_time": use_sim_time,
"sim_gazebo": sim_gazebo,
"hardware": hardware,
"launch_controllers": launch_controllers,
# "gazebo_gui": gazebo_gui
}.items()
)
args = [
"--env",
env,
"--env-kwargs",
env_kwargs,
"--algo",
algo,
"--seed",
seed,
"--num-threads",
num_threads,
"--n-timesteps",
n_timesteps,
"--preload-replay-buffer",
preload_replay_buffer,
"--log-folder",
log_folder,
"--tensorboard-log",
tensorboard_log,
"--log-interval",
log_interval,
"--uuid",
uuid,
"--optimize-hyperparameters",
"True",
"--sampler",
sampler,
"--pruner",
pruner,
"--n-trials",
n_trials,
"--n-startup-trials",
n_startup_trials,
"--n-evaluations",
n_evaluations,
"--n-jobs",
n_jobs,
"--storage",
storage,
"--study-name",
study_name,
"--eval-episodes",
eval_episodes,
"--verbose",
verbose,
"--truncate-last-trajectory",
truncate_last_trajectory,
"--ros-args",
"--log-level",
log_level,
]
rl_task = Node(
package="rbs_gym",
executable="train",
output="log",
arguments = args,
parameters=[{"use_sim_time": True}]
)
delay_robot_control_stack = TimerAction(
period=10.0,
actions=[single_robot_setup]
)
nodes_to_start = [
rl_task,
delay_robot_control_stack
]
return nodes_to_start
def generate_launch_description():
declared_arguments = []
declared_arguments.append(
DeclareLaunchArgument(
"robot_type",
description="Type of robot by name",
choices=["rbs_arm","ur3", "ur3e", "ur5", "ur5e", "ur10", "ur10e", "ur16e"],
default_value="rbs_arm",
)
)
# General arguments
declared_arguments.append(
DeclareLaunchArgument(
"controllers_file",
default_value="controllers_gazebosim.yaml",
description="YAML file with the controllers configuration.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="rbs_arm",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="rbs_arm_modular.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_name",
default_value="arm0",
description="Name for robot, used to apply namespace for specific robot in multirobot setup",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"start_joint_controller",
default_value="false",
description="Enable headless mode for robot control",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"initial_joint_controller",
default_value="joint_trajectory_controller",
description="Robot controller to start.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_package",
default_value="rbs_arm",
description="MoveIt config package with robot SRDF/XACRO files. Usually the argument \
is not set, it enables use of a custom moveit config.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_file",
default_value="rbs_arm.srdf.xacro",
description="MoveIt SRDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="true",
description="Make MoveIt to use simulation time.\
This is needed for the trajectory planing in simulation.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"gripper_name",
default_value="rbs_gripper",
choices=["rbs_gripper", ""],
description="choose gripper by name (leave empty if hasn't)",
)
)
declared_arguments.append(
DeclareLaunchArgument("with_gripper",
default_value="true",
description="With gripper or not?")
)
declared_arguments.append(
DeclareLaunchArgument("sim_gazebo",
default_value="true",
description="Gazebo Simulation")
)
declared_arguments.append(
DeclareLaunchArgument("env_manager",
default_value="false",
description="Launch env_manager?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_sim",
default_value="true",
description="Launch simulator (Gazebo)?\
Most general arg")
)
declared_arguments.append(
DeclareLaunchArgument("launch_moveit",
default_value="false",
description="Launch moveit?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_perception",
default_value="false",
description="Launch perception?")
)
declared_arguments.append(
DeclareLaunchArgument("launch_task_planner",
default_value="false",
description="Launch task_planner?")
)
declared_arguments.append(
DeclareLaunchArgument("cartesian_controllers",
default_value="true",
description="Load cartesian\
controllers?")
)
declared_arguments.append(
DeclareLaunchArgument("hardware",
choices=["gazebo", "mock"],
default_value="gazebo",
description="Choose your harware_interface")
)
declared_arguments.append(
DeclareLaunchArgument("launch_controllers",
default_value="true",
description="Launch controllers?")
)
declared_arguments.append(
DeclareLaunchArgument("gazebo_gui",
default_value="true",
description="Launch gazebo with gui?")
)
# training arguments
declared_arguments.append(
DeclareLaunchArgument(
"env",
default_value="Reach-Gazebo-v0",
description="Environment ID",
))
declared_arguments.append(
DeclareLaunchArgument(
"env_kwargs",
default_value="",
description="Optional keyword argument to pass to the env constructor.",
))
declared_arguments.append(
DeclareLaunchArgument(
"vec_env",
default_value="dummy",
description="Type of VecEnv to use (dummy or subproc).",
))
# Algorithm and training
declared_arguments.append(
DeclareLaunchArgument(
"algo",
default_value="sac",
description="RL algorithm to use during the training.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_timesteps",
default_value="-1",
description="Overwrite the number of timesteps.",
))
declared_arguments.append(
DeclareLaunchArgument(
"hyperparams",
default_value="",
description="Optional RL hyperparameter overwrite (e.g. learning_rate:0.01 train_freq:10).",
))
declared_arguments.append(
DeclareLaunchArgument(
"num_threads",
default_value="-1",
description="Number of threads for PyTorch (-1 to use default).",
))
# Continue training an already trained agent
declared_arguments.append(
DeclareLaunchArgument(
"trained_agent",
default_value="",
description="Path to a pretrained agent to continue training.",
))
# Random seed
declared_arguments.append(
DeclareLaunchArgument(
"seed",
default_value="84",
description="Random generator seed.",
))
# Saving of model
declared_arguments.append(
DeclareLaunchArgument(
"save_freq",
default_value="10000",
description="Save the model every n steps (if negative, no checkpoint).",
))
declared_arguments.append(
DeclareLaunchArgument(
"save_replay_buffer",
default_value="False",
description="Save the replay buffer too (when applicable).",
))
# Pre-load a replay buffer and start training on it
declared_arguments.append(
DeclareLaunchArgument(
"preload_replay_buffer",
default_value="",
description="Path to a replay buffer that should be preloaded before starting the training process.",
))
# Logging
declared_arguments.append(
DeclareLaunchArgument(
"log_folder",
default_value="logs",
description="Path to the log directory.",
))
declared_arguments.append(
DeclareLaunchArgument(
"tensorboard_log",
default_value="tensorboard_logs",
description="Tensorboard log dir.",
))
declared_arguments.append(
DeclareLaunchArgument(
"log_interval",
default_value="-1",
description="Override log interval (default: -1, no change).",
))
declared_arguments.append(
DeclareLaunchArgument(
"uuid",
default_value="False",
description="Ensure that the run has a unique ID.",
))
declared_arguments.append(
DeclareLaunchArgument(
"sampler",
default_value="tpe",
description="Sampler to use when optimizing hyperparameters (random, tpe or skopt).",
))
declared_arguments.append(
DeclareLaunchArgument(
"pruner",
default_value="median",
description="Pruner to use when optimizing hyperparameters (halving, median or none).",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_trials",
default_value="10",
description="Number of trials for optimizing hyperparameters.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_startup_trials",
default_value="5",
description="Number of trials before using optuna sampler.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_evaluations",
default_value="2",
description="Number of evaluations for hyperparameter optimization.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_jobs",
default_value="1",
description="Number of parallel jobs when optimizing hyperparameters.",
))
declared_arguments.append(
DeclareLaunchArgument(
"storage",
default_value="",
description="Database storage path if distributed optimization should be used.",
))
declared_arguments.append(
DeclareLaunchArgument(
"study_name",
default_value="",
description="Study name for distributed optimization.",
))
# Evaluation
declared_arguments.append(
DeclareLaunchArgument(
"eval_freq",
default_value="-1",
description="Evaluate the agent every n steps (if negative, no evaluation).",
))
declared_arguments.append(
DeclareLaunchArgument(
"eval_episodes",
default_value="5",
description="Number of episodes to use for evaluation.",
))
# Verbosity
declared_arguments.append(
DeclareLaunchArgument(
"verbose",
default_value="1",
description="Verbose mode (0: no output, 1: INFO).",
))
# HER specifics
declared_arguments.append(
DeclareLaunchArgument(
"truncate_last_trajectory",
default_value="True",
description="When using HER with online sampling the last trajectory in the replay buffer will be truncated after) reloading the replay buffer."
)),
declared_arguments.append(
DeclareLaunchArgument(
"log_level",
default_value="error",
description="The level of logging that is applied to all ROS 2 nodes launched by this script.",
))
# env_variables = [
# SetEnvironmentVariable(name="OMP_DYNAMIC", value="TRUE"),
# SetEnvironmentVariable(name="OMP_NUM_THREADS", value=str(cpu_count() // 2))
# ]
return LaunchDescription(declared_arguments + [OpaqueFunction(function=launch_setup)])
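This variant passes --optimize-hyperparameters together with sampler, pruner and trial counts to the train executable; assuming an rl_zoo3-like implementation on the receiving side (not shown in this diff), those flags correspond roughly to an Optuna study configured as in the sketch below. The objective body is a placeholder.

import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    # ...build the agent with the sampled hyperparameters, train briefly,
    # and return the mean evaluation reward...
    return -abs(lr - 3e-4)  # placeholder objective for illustration only

study = optuna.create_study(
    sampler=optuna.samplers.TPESampler(n_startup_trials=5),  # --sampler tpe
    pruner=optuna.pruners.MedianPruner(),                    # --pruner median
    direction="maximize",
)
study.optimize(objective, n_trials=10, n_jobs=1)             # --n-trials / --n-jobs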


@@ -1,487 +0,0 @@
import os
from os import cpu_count
import xacro
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
IncludeLaunchDescription,
OpaqueFunction,
SetEnvironmentVariable,
TimerAction,
)
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.actions import Node
from launch_ros.substitutions import FindPackageShare
import yaml
def launch_setup(context, *args, **kwargs):
# Initialize Arguments
robot_type = LaunchConfiguration("robot_type")
# General arguments
with_gripper_condition = LaunchConfiguration("with_gripper")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
use_moveit = LaunchConfiguration("use_moveit")
moveit_config_package = LaunchConfiguration("moveit_config_package")
moveit_config_file = LaunchConfiguration("moveit_config_file")
use_sim_time = LaunchConfiguration("use_sim_time")
scene_config_file = LaunchConfiguration("scene_config_file").perform(context)
ee_link_name = LaunchConfiguration("ee_link_name").perform(context)
base_link_name = LaunchConfiguration("base_link_name").perform(context)
control_space = LaunchConfiguration("control_space").perform(context)
control_strategy = LaunchConfiguration("control_strategy").perform(context)
interactive = LaunchConfiguration("interactive").perform(context)
# training arguments
env = LaunchConfiguration("env")
use_sim_time = LaunchConfiguration("use_sim_time")
log_level = LaunchConfiguration("log_level")
env_kwargs = LaunchConfiguration("env_kwargs")
description_package_abs_path = get_package_share_directory(
description_package.perform(context)
)
simulation_controllers = os.path.join(
description_package_abs_path, "config", "controllers.yaml"
)
xacro_file = os.path.join(
description_package_abs_path,
"urdf",
description_file.perform(context),
)
xacro_config_file = f"{description_package_abs_path}/config/xacro_args.yaml"
# TODO: hide this to another place
# Load xacro_args
def param_constructor(loader, node, local_vars):
value = loader.construct_scalar(node)
return LaunchConfiguration(value).perform(
local_vars.get("context", "Launch context if not defined")
)
def variable_constructor(loader, node, local_vars):
value = loader.construct_scalar(node)
return local_vars.get(value, f"Variable '{value}' not found")
def load_xacro_args(yaml_file, local_vars):
# Get the value from a ROS 2 launch argument
yaml.add_constructor(
"!param", lambda loader, node: param_constructor(loader, node, local_vars)
)
# Get value from local variable in this code
# The local variable should be initialized before the loader was called
yaml.add_constructor(
"!variable",
lambda loader, node: variable_constructor(loader, node, local_vars),
)
with open(yaml_file, "r") as file:
return yaml.load(file, Loader=yaml.FullLoader)
mappings_data = load_xacro_args(xacro_config_file, locals())
robot_description_doc = xacro.process_file(xacro_file, mappings=mappings_data)
robot_description_semantic_content = ""
if use_moveit.perform(context) == "true":
srdf_config_file = f"{description_package_abs_path}/config/srdf_xacro_args.yaml"
srdf_file = os.path.join(
get_package_share_directory(moveit_config_package.perform(context)),
"srdf",
moveit_config_file.perform(context),
)
srdf_mappings = load_xacro_args(srdf_config_file, locals())
robot_description_semantic_content = xacro.process_file(srdf_file, mappings=srdf_mappings)
robot_description_semantic_content = robot_description_semantic_content.toprettyxml(indent=" ")
control_space = "joint"
control_strategy = "position"
interactive = "false"
robot_description_content = robot_description_doc.toprettyxml(indent=" ")
robot_description = {"robot_description": robot_description_content}
single_robot_setup = IncludeLaunchDescription(
PythonLaunchDescriptionSource(
[
PathJoinSubstitution(
[FindPackageShare("rbs_bringup"), "launch", "rbs_robot.launch.py"]
)
]
),
launch_arguments={
"with_gripper": with_gripper_condition,
"controllers_file": simulation_controllers,
"robot_type": robot_type,
"description_package": description_package,
"description_file": description_file,
"robot_name": robot_type,
"use_moveit": "false",
"moveit_config_package": moveit_config_package,
"moveit_config_file": moveit_config_file,
"use_sim_time": "true",
"use_skills": "false",
"use_controllers": "true",
"robot_description": robot_description_content,
"robot_description_semantic": robot_description_semantic_content,
"base_link_name": base_link_name,
"ee_link_name": ee_link_name,
"control_space": control_space,
"control_strategy": "effort",
"interactive_control": "false",
}.items(),
)
args = [
"--env",
env,
"--env-kwargs",
env_kwargs,
"--ros-args",
"--log-level",
log_level,
]
rl_task = Node(
package="rbs_gym",
executable="test_agent",
output="log",
arguments=args,
parameters=[{"use_sim_time": True}, robot_description],
)
clock_bridge = Node(
package="ros_gz_bridge",
executable="parameter_bridge",
arguments=["/clock@rosgraph_msgs/msg/Clock[ignition.msgs.Clock"],
output="screen",
)
delay_robot_control_stack = TimerAction(period=10.0, actions=[single_robot_setup])
nodes_to_start = [
# env,
# rl_task,
clock_bridge,
delay_robot_control_stack,
]
return nodes_to_start
def generate_launch_description():
declared_arguments = []
declared_arguments.append(
DeclareLaunchArgument(
"robot_type",
description="Type of robot by name",
choices=[
"rbs_arm",
"ar4",
"ur3",
"ur3e",
"ur5",
"ur5e",
"ur10",
"ur10e",
"ur16e",
],
default_value="rbs_arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="rbs_arm",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="rbs_arm_modular.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_name",
default_value="arm0",
description="Name for robot, used to apply namespace for specific robot in multirobot setup",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_package",
default_value="rbs_arm",
description="MoveIt config package with robot SRDF/XACRO files. Usually the argument \
is not set, it enables use of a custom moveit config.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_file",
default_value="rbs_arm.srdf.xacro",
description="MoveIt SRDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="true",
description="Make MoveIt to use simulation time.\
This is needed for the trajectory planing in simulation.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"with_gripper", default_value="true", description="With gripper or not?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_moveit", default_value="false", description="Launch moveit?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"launch_perception", default_value="false", description="Launch perception?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_controllers",
default_value="true",
description="Launch controllers?",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"scene_config_file",
default_value="",
description="Path to a scene configuration file",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"ee_link_name",
default_value="",
description="End effector name of robot arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"base_link_name",
default_value="",
description="Base link name if robot arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"control_space",
default_value="task",
choices=["task", "joint"],
description="Specify the control space for the robot (e.g., task space).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"control_strategy",
default_value="position",
choices=["position", "velocity", "effort"],
description="Specify the control strategy (e.g., position control).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"interactive",
default_value="true",
description="Wheter to run the motion_control_handle controller",
),
)
# training arguments
declared_arguments.append(
DeclareLaunchArgument(
"env",
default_value="Reach-Gazebo-v0",
description="Environment ID",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"env_kwargs",
default_value="",
description="Optional keyword argument to pass to the env constructor.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"vec_env",
default_value="dummy",
description="Type of VecEnv to use (dummy or subproc).",
)
)
# Algorithm and training
declared_arguments.append(
DeclareLaunchArgument(
"algo",
default_value="sac",
description="RL algorithm to use during the training.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"n_timesteps",
default_value="-1",
description="Overwrite the number of timesteps.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"hyperparams",
default_value="",
description="Optional RL hyperparameter overwrite (e.g. learning_rate:0.01 train_freq:10).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"num_threads",
default_value="-1",
description="Number of threads for PyTorch (-1 to use default).",
)
)
# Continue training an already trained agent
declared_arguments.append(
DeclareLaunchArgument(
"trained_agent",
default_value="",
description="Path to a pretrained agent to continue training.",
)
)
# Random seed
declared_arguments.append(
DeclareLaunchArgument(
"seed",
default_value="-1",
description="Random generator seed.",
)
)
# Saving of model
declared_arguments.append(
DeclareLaunchArgument(
"save_freq",
default_value="10000",
description="Save the model every n steps (if negative, no checkpoint).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"save_replay_buffer",
default_value="False",
description="Save the replay buffer too (when applicable).",
)
)
# Pre-load a replay buffer and start training on it
declared_arguments.append(
DeclareLaunchArgument(
"preload_replay_buffer",
default_value="",
description="Path to a replay buffer that should be preloaded before starting the training process.",
)
)
# Logging
declared_arguments.append(
DeclareLaunchArgument(
"log_folder",
default_value="logs",
description="Path to the log directory.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"tensorboard_log",
default_value="tensorboard_logs",
description="Tensorboard log dir.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"log_interval",
default_value="-1",
description="Override log interval (default: -1, no change).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"uuid",
default_value="False",
description="Ensure that the run has a unique ID.",
)
)
# Evaluation
declared_arguments.append(
DeclareLaunchArgument(
"eval_freq",
default_value="-1",
description="Evaluate the agent every n steps (if negative, no evaluation).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"eval_episodes",
default_value="5",
description="Number of episodes to use for evaluation.",
)
)
# Verbosity
declared_arguments.append(
DeclareLaunchArgument(
"verbose",
default_value="1",
description="Verbose mode (0: no output, 1: INFO).",
)
)
# HER specifics
declared_arguments.append(
DeclareLaunchArgument(
"truncate_last_trajectory",
default_value="True",
description="When using HER with online sampling the last trajectory in the replay buffer will be truncated after) reloading the replay buffer.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"log_level",
default_value="error",
description="The level of logging that is applied to all ROS 2 nodes launched by this script.",
)
)
cpu_c = cpu_count()
if cpu_c is not None:
env_variables = [
SetEnvironmentVariable(name="OMP_DYNAMIC", value="TRUE"),
SetEnvironmentVariable(name="OMP_NUM_THREADS", value=str(cpu_c // 2)),
]
else:
env_variables = [
SetEnvironmentVariable(name="OMP_DYNAMIC", value="TRUE"),
]
return LaunchDescription(
declared_arguments + [OpaqueFunction(function=launch_setup)] + env_variables
)
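The load_xacro_args helper above resolves two custom YAML tags: !param pulls a value from a launch argument, and !variable pulls it from a local Python variable defined before the loader runs. A self-contained sketch of the same mechanism follows; the sample document and names below are invented for illustration and are not the real xacro_args.yaml.

import yaml

SAMPLE = """
hardware: !param hardware
controllers_file: !variable simulation_controllers
"""

def load_mappings(text: str, launch_args: dict, local_vars: dict) -> dict:
    # Registering without an explicit Loader also covers yaml.FullLoader.
    yaml.add_constructor(
        "!param", lambda loader, node: launch_args[loader.construct_scalar(node)]
    )
    yaml.add_constructor(
        "!variable", lambda loader, node: local_vars[loader.construct_scalar(node)]
    )
    return yaml.load(text, Loader=yaml.FullLoader)

print(load_mappings(
    SAMPLE,
    launch_args={"hardware": "gazebo"},
    local_vars={"simulation_controllers": "/path/to/controllers.yaml"},
))
# {'hardware': 'gazebo', 'controllers_file': '/path/to/controllers.yaml'}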


@@ -1,518 +0,0 @@
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
IncludeLaunchDescription,
OpaqueFunction,
SetEnvironmentVariable,
TimerAction
)
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare
from launch_ros.actions import Node
import os
from os import cpu_count
from ament_index_python.packages import get_package_share_directory
import yaml
import xacro
def launch_setup(context, *args, **kwargs):
# Initialize Arguments
robot_type = LaunchConfiguration("robot_type")
# General arguments
with_gripper_condition = LaunchConfiguration("with_gripper")
description_package = LaunchConfiguration("description_package")
description_file = LaunchConfiguration("description_file")
use_moveit = LaunchConfiguration("use_moveit")
moveit_config_package = LaunchConfiguration("moveit_config_package")
moveit_config_file = LaunchConfiguration("moveit_config_file")
use_sim_time = LaunchConfiguration("use_sim_time")
scene_config_file = LaunchConfiguration("scene_config_file").perform(context)
ee_link_name = LaunchConfiguration("ee_link_name").perform(context)
base_link_name = LaunchConfiguration("base_link_name").perform(context)
control_space = LaunchConfiguration("control_space").perform(context)
control_strategy = LaunchConfiguration("control_strategy").perform(context)
interactive = LaunchConfiguration("interactive").perform(context)
# training arguments
env = LaunchConfiguration("env")
algo = LaunchConfiguration("algo")
hyperparams = LaunchConfiguration("hyperparams")
n_timesteps = LaunchConfiguration("n_timesteps")
num_threads = LaunchConfiguration("num_threads")
seed = LaunchConfiguration("seed")
trained_agent = LaunchConfiguration("trained_agent")
save_freq = LaunchConfiguration("save_freq")
save_replay_buffer = LaunchConfiguration("save_replay_buffer")
preload_replay_buffer = LaunchConfiguration("preload_replay_buffer")
log_folder = LaunchConfiguration("log_folder")
tensorboard_log = LaunchConfiguration("tensorboard_log")
log_interval = LaunchConfiguration("log_interval")
uuid = LaunchConfiguration("uuid")
eval_freq = LaunchConfiguration("eval_freq")
eval_episodes = LaunchConfiguration("eval_episodes")
verbose = LaunchConfiguration("verbose")
truncate_last_trajectory = LaunchConfiguration("truncate_last_trajectory")
use_sim_time = LaunchConfiguration("use_sim_time")
log_level = LaunchConfiguration("log_level")
env_kwargs = LaunchConfiguration("env_kwargs")
track = LaunchConfiguration("track")
description_package_abs_path = get_package_share_directory(
description_package.perform(context)
)
simulation_controllers = os.path.join(
description_package_abs_path, "config", "controllers.yaml"
)
xacro_file = os.path.join(
description_package_abs_path,
"urdf",
description_file.perform(context),
)
xacro_config_file = f"{description_package_abs_path}/config/xacro_args.yaml"
# TODO: hide this to another place
# Load xacro_args
def param_constructor(loader, node, local_vars):
value = loader.construct_scalar(node)
return LaunchConfiguration(value).perform(
local_vars.get("context", "Launch context if not defined")
)
def variable_constructor(loader, node, local_vars):
value = loader.construct_scalar(node)
return local_vars.get(value, f"Variable '{value}' not found")
def load_xacro_args(yaml_file, local_vars):
# Get the value from a ROS 2 launch argument
yaml.add_constructor(
"!param", lambda loader, node: param_constructor(loader, node, local_vars)
)
# Get value from local variable in this code
# The local variable should be initialized before the loader was called
yaml.add_constructor(
"!variable",
lambda loader, node: variable_constructor(loader, node, local_vars),
)
with open(yaml_file, "r") as file:
return yaml.load(file, Loader=yaml.FullLoader)
mappings_data = load_xacro_args(xacro_config_file, locals())
robot_description_doc = xacro.process_file(xacro_file, mappings=mappings_data)
robot_description_semantic_content = ""
if use_moveit.perform(context) == "true":
srdf_config_file = f"{description_package_abs_path}/config/srdf_xacro_args.yaml"
srdf_file = os.path.join(
get_package_share_directory(moveit_config_package.perform(context)),
"srdf",
moveit_config_file.perform(context),
)
srdf_mappings = load_xacro_args(srdf_config_file, locals())
robot_description_semantic_content = xacro.process_file(srdf_file, mappings=srdf_mappings)
robot_description_semantic_content = robot_description_semantic_content.toprettyxml(indent=" ")
control_space = "joint"
control_strategy = "position"
interactive = "false"
robot_description_content = robot_description_doc.toprettyxml(indent=" ")
robot_description = {"robot_description": robot_description_content}
single_robot_setup = IncludeLaunchDescription(
PythonLaunchDescriptionSource(
[
PathJoinSubstitution(
[FindPackageShare("rbs_bringup"), "launch", "rbs_robot.launch.py"]
)
]
),
launch_arguments={
"with_gripper": with_gripper_condition,
"controllers_file": simulation_controllers,
"robot_type": robot_type,
"description_package": description_package,
"description_file": description_file,
"robot_name": robot_type,
"use_moveit": use_moveit,
"moveit_config_package": moveit_config_package,
"moveit_config_file": moveit_config_file,
"use_sim_time": use_sim_time,
"use_controllers": "true",
"robot_description": robot_description_content,
"robot_description_semantic": robot_description_semantic_content,
"base_link_name": base_link_name,
"ee_link_name": ee_link_name,
"control_space": control_space,
"control_strategy": control_strategy,
"interactive_control": interactive,
}.items(),
)
args = [
"--env",
env,
"--env-kwargs",
env_kwargs,
"--algo",
algo,
"--hyperparams",
hyperparams,
"--n-timesteps",
n_timesteps,
"--num-threads",
num_threads,
"--seed",
seed,
"--trained-agent",
trained_agent,
"--save-freq",
save_freq,
"--save-replay-buffer",
save_replay_buffer,
"--preload-replay-buffer",
preload_replay_buffer,
"--log-folder",
log_folder,
"--tensorboard-log",
tensorboard_log,
"--log-interval",
log_interval,
"--uuid",
uuid,
"--eval-freq",
eval_freq,
"--eval-episodes",
eval_episodes,
"--verbose",
verbose,
"--track",
track,
"--truncate-last-trajectory",
truncate_last_trajectory,
"--ros-args",
"--log-level",
log_level,
]
clock_bridge = Node(
package='ros_gz_bridge',
executable='parameter_bridge',
arguments=['/clock@rosgraph_msgs/msg/Clock[ignition.msgs.Clock'],
output='screen')
rl_task = Node(
package="rbs_gym",
executable="train",
output="log",
arguments=args,
parameters=[{"use_sim_time": True}]
)
delay_robot_control_stack = TimerAction(
period=20.0,
actions=[single_robot_setup]
)
nodes_to_start = [
# env,
rl_task,
clock_bridge,
delay_robot_control_stack
]
return nodes_to_start
def generate_launch_description():
declared_arguments = []
declared_arguments.append(
DeclareLaunchArgument(
"robot_type",
description="Type of robot by name",
choices=[
"rbs_arm",
"ar4",
"ur3",
"ur3e",
"ur5",
"ur5e",
"ur10",
"ur10e",
"ur16e",
],
default_value="rbs_arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_package",
default_value="rbs_arm",
description="Description package with robot URDF/XACRO files. Usually the argument \
is not set, it enables use of a custom description.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"description_file",
default_value="rbs_arm_modular.xacro",
description="URDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"robot_name",
default_value="arm0",
description="Name for robot, used to apply namespace for specific robot in multirobot setup",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_package",
default_value="rbs_arm",
description="MoveIt config package with robot SRDF/XACRO files. Usually the argument \
is not set, it enables use of a custom moveit config.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"moveit_config_file",
default_value="rbs_arm.srdf.xacro",
description="MoveIt SRDF/XACRO description file with the robot.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_sim_time",
default_value="true",
description="Make MoveIt to use simulation time.\
This is needed for the trajectory planing in simulation.",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"with_gripper", default_value="true", description="With gripper or not?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_moveit", default_value="false", description="Launch moveit?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"launch_perception", default_value="false", description="Launch perception?"
)
)
declared_arguments.append(
DeclareLaunchArgument(
"use_controllers",
default_value="true",
description="Launch controllers?",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"scene_config_file",
default_value="",
description="Path to a scene configuration file",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"ee_link_name",
default_value="",
description="End effector name of robot arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"base_link_name",
default_value="",
description="Base link name if robot arm",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"control_space",
default_value="task",
choices=["task", "joint"],
description="Specify the control space for the robot (e.g., task space).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"control_strategy",
default_value="position",
choices=["position", "velocity", "effort"],
description="Specify the control strategy (e.g., position control).",
)
)
declared_arguments.append(
DeclareLaunchArgument(
"interactive",
default_value="true",
description="Wheter to run the motion_control_handle controller",
),
)
# training arguments
declared_arguments.append(
DeclareLaunchArgument(
"env",
default_value="Reach-Gazebo-v0",
description="Environment ID",
))
declared_arguments.append(
DeclareLaunchArgument(
"env_kwargs",
default_value="",
description="Optional keyword argument to pass to the env constructor.",
))
declared_arguments.append(
DeclareLaunchArgument(
"vec_env",
default_value="dummy",
description="Type of VecEnv to use (dummy or subproc).",
))
# Algorithm and training
declared_arguments.append(
DeclareLaunchArgument(
"algo",
default_value="sac",
description="RL algorithm to use during the training.",
))
declared_arguments.append(
DeclareLaunchArgument(
"n_timesteps",
default_value="-1",
description="Overwrite the number of timesteps.",
))
declared_arguments.append(
DeclareLaunchArgument(
"hyperparams",
default_value="",
description="Optional RL hyperparameter overwrite (e.g. learning_rate:0.01 train_freq:10).",
))
declared_arguments.append(
DeclareLaunchArgument(
"num_threads",
default_value="-1",
description="Number of threads for PyTorch (-1 to use default).",
))
# Continue training an already trained agent
declared_arguments.append(
DeclareLaunchArgument(
"trained_agent",
default_value="",
description="Path to a pretrained agent to continue training.",
))
# Random seed
declared_arguments.append(
DeclareLaunchArgument(
"seed",
default_value="-1",
description="Random generator seed.",
))
# Saving of model
declared_arguments.append(
DeclareLaunchArgument(
"save_freq",
default_value="10000",
description="Save the model every n steps (if negative, no checkpoint).",
))
declared_arguments.append(
DeclareLaunchArgument(
"save_replay_buffer",
default_value="False",
description="Save the replay buffer too (when applicable).",
))
# Pre-load a replay buffer and start training on it
declared_arguments.append(
DeclareLaunchArgument(
"preload_replay_buffer",
default_value="",
description="Path to a replay buffer that should be preloaded before starting the training process.",
))
# Logging
declared_arguments.append(
DeclareLaunchArgument(
"log_folder",
default_value="logs",
description="Path to the log directory.",
))
declared_arguments.append(
DeclareLaunchArgument(
"tensorboard_log",
default_value="tensorboard_logs",
description="Tensorboard log dir.",
))
declared_arguments.append(
DeclareLaunchArgument(
"log_interval",
default_value="-1",
description="Override log interval (default: -1, no change).",
))
declared_arguments.append(
DeclareLaunchArgument(
"uuid",
default_value="False",
description="Ensure that the run has a unique ID.",
))
# Evaluation
declared_arguments.append(
DeclareLaunchArgument(
"eval_freq",
default_value="-1",
description="Evaluate the agent every n steps (if negative, no evaluation).",
))
declared_arguments.append(
DeclareLaunchArgument(
"eval_episodes",
default_value="5",
description="Number of episodes to use for evaluation.",
))
# Verbosity
declared_arguments.append(
DeclareLaunchArgument(
"verbose",
default_value="1",
description="Verbose mode (0: no output, 1: INFO).",
))
declared_arguments.append(
DeclareLaunchArgument(
"truncate_last_trajectory",
default_value="True",
description="When using HER with online sampling the last trajectory in the replay buffer will be truncated after) reloading the replay buffer."
))
declared_arguments.append(
DeclareLaunchArgument(
"log_level",
default_value="error",
description="The level of logging that is applied to all ROS 2 nodes launched by this script.",
))
declared_arguments.append(
DeclareLaunchArgument(
"track",
default_value="true",
description="The level of logging that is applied to all ROS 2 nodes launched by this script.",
))
env_variables = [
SetEnvironmentVariable(name="OMP_DYNAMIC", value="TRUE"),
SetEnvironmentVariable(name="OMP_NUM_THREADS", value=str(cpu_count() // 2))
]
return LaunchDescription(declared_arguments + [OpaqueFunction(function=launch_setup)] + env_variables)
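Both the test and training launch files above bridge /clock through ros_gz_bridge. The symbol between the ROS and Gazebo type names encodes the direction: a second @ bridges both ways, [ bridges Gazebo to ROS only, ] bridges ROS to Gazebo only. For /clock only the simulator publishes, so the one-way form is used. As a standalone sketch:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    clock_bridge = Node(
        package="ros_gz_bridge",
        executable="parameter_bridge",
        # "[" after the ROS type restricts the bridge to Gazebo -> ROS.
        arguments=["/clock@rosgraph_msgs/msg/Clock[ignition.msgs.Clock"],
        output="screen",
    )
    return LaunchDescription([clock_bridge])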


@@ -1,18 +0,0 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>rbs_gym</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="ur.narmak@gmail.com">narmak</maintainer>
<license>Apache-2.0</license>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>


@@ -1,183 +0,0 @@
# Note: The `open3d` and `stable_baselines3` modules must be imported prior to `gym_gz`
from env_manager.models.configs import (
RobotData,
SceneData,
SphereObjectData,
TerrainData,
)
from env_manager.models.configs.objects import ObjectRandomizerData
from env_manager.models.configs.robot import RobotRandomizerData
from env_manager.models.configs.scene import PluginsData
# import open3d # isort:skip
import stable_baselines3 # isort:skip
# Note: If installed, `tensorflow` module must be imported before `gym_gz`/`scenario`
# Otherwise, protobuf version incompatibility will cause an error
try:
from importlib.util import find_spec
if find_spec("tensorflow") is not None:
import tensorflow
except:
pass
from os import environ, path
from typing import Any, Dict, Tuple
import numpy as np
from ament_index_python.packages import get_package_share_directory
from gymnasium.envs.registration import register
from rbs_assets_library import get_world_file
from rbs_gym.utils.utils import str2bool
from . import tasks
######################
# Runtime Entrypoint #
######################
RBS_ENVS_TASK_ENTRYPOINT: str = "gym_gz.runtimes.gazebo_runtime:GazeboRuntime"
###################
# Robot Specifics #
###################
# Default robot model to use in the tasks where robot can be static
RBS_ENVS_ROBOT_MODEL: str = "rbs_arm"
######################
# Datasets and paths #
######################
# Path to directory containing base SDF worlds
RBS_ENVS_WORLDS_DIR: str = path.join(get_package_share_directory("rbs_gym"), "worlds")
###########
# Presets #
###########
# Gravity preset for Earth
# GRAVITY_EARTH: Tuple[float, float, float] = (0.0, 0.0, -9.80665)
# GRAVITY_EARTH_STD: Tuple[float, float, float] = (0.0, 0.0, 0.0232)
############################
# Additional Configuration #
############################
# BROADCAST_GUI: bool = str2bool(
# environ.get("RBS_ENVS_BROADCAST_INTERACTIVE_GUI", default=True)
# )
#########
# Reach #
#########
REACH_MAX_EPISODE_STEPS: int = 100
REACH_KWARGS: Dict[str, Any] = {
"agent_rate": 4.0,
"robot_model": RBS_ENVS_ROBOT_MODEL,
"workspace_frame_id": "world",
"workspace_centre": (-0.45, 0.0, 0.35),
"workspace_volume": (0.7, 0.7, 0.7),
"ignore_new_actions_while_executing": False,
"use_servo": True,
"scaling_factor_translation": 8.0,
"scaling_factor_rotation": 3.0,
"restrict_position_goal_to_workspace": False,
"enable_gripper": False,
"sparse_reward": False,
"collision_reward": -10.0,
"act_quick_reward": -0.01,
"required_accuracy": 0.05,
"num_threads": 3,
}
REACH_KWARGS_SIM: Dict[str, Any] = {
"physics_rate": 1000.0,
"real_time_factor": float(np.finfo(np.float32).max),
"world": get_world_file("default"),
}
REACH_RANDOMIZER: str = "rbs_gym.envs.randomizers:ManipulationGazeboEnvRandomizer"
SCENE_CONFIGURATION: SceneData = SceneData(
physics_rollouts_num=0,
robot=RobotData(
name="rbs_arm",
joint_positions=[0.0, 0.5, 3.14159, 1.5, 0.0, 1.4, 0.0],
with_gripper=True,
gripper_joint_positions=0.00,
randomizer=RobotRandomizerData(joint_positions=True),
),
objects=[
SphereObjectData(
name="sphere",
relative_to="base_link",
position=(0.0, 0.3, 0.5),
static=True,
collision=False,
color=(0.0, 1.0, 0.0, 0.8),
randomize=ObjectRandomizerData(random_pose=True, models_rollouts_num=2),
)
],
plugins=PluginsData(
scene_broadcaster=True, user_commands=True, fts_broadcaster=True
),
)
# Task
register(
id="Reach-v0",
entry_point=RBS_ENVS_TASK_ENTRYPOINT,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={
"task_cls": tasks.Reach,
**REACH_KWARGS,
},
)
register(
id="Reach-ColorImage-v0",
entry_point=RBS_ENVS_TASK_ENTRYPOINT,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={
"task_cls": tasks.ReachColorImage,
**REACH_KWARGS,
},
)
register(
id="Reach-DepthImage-v0",
entry_point=RBS_ENVS_TASK_ENTRYPOINT,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={
"task_cls": tasks.ReachDepthImage,
**REACH_KWARGS,
},
)
# Gazebo wrapper
register(
id="Reach-Gazebo-v0",
entry_point=REACH_RANDOMIZER,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={"env": "Reach-v0", **REACH_KWARGS_SIM, "scene_args": SCENE_CONFIGURATION},
)
register(
id="Reach-ColorImage-Gazebo-v0",
entry_point=REACH_RANDOMIZER,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={
"env": "Reach-ColorImage-v0",
**REACH_KWARGS_SIM,
"scene_args": SCENE_CONFIGURATION,
},
)
register(
id="Reach-DepthImage-Gazebo-v0",
entry_point=REACH_RANDOMIZER,
max_episode_steps=REACH_MAX_EPISODE_STEPS,
kwargs={
"env": "Reach-DepthImage-v0",
**REACH_KWARGS_SIM,
"scene_args": SCENE_CONFIGURATION,
},
)
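Once this module is imported, the register() calls above make the environments available through the standard Gymnasium API. A minimal usage sketch, assuming the module lives at rbs_gym.envs (suggested by the randomizer entry-point string) and that the Gazebo side is already running as in the launch files:

import gymnasium as gym
import rbs_gym.envs  # noqa: F401  (assumed import path; triggers the register() calls)

env = gym.make("Reach-Gazebo-v0")
obs, info = env.reset()
for _ in range(10):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()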


@@ -1,3 +0,0 @@
from .cartesian_force_controller import CartesianForceController
from .grippper_controller import GripperController
from .joint_effort_controller import JointEffortController


@@ -1,41 +0,0 @@
import sys
import rclpy
from typing import Optional
from geometry_msgs.msg import WrenchStamped
from rclpy.node import Node
from rclpy.parameter import Parameter
class CartesianForceController:
def __init__(self, node, namespace: Optional[str] = "") -> None:
self.node = node
self.publisher = node.create_publisher(WrenchStamped,
namespace + "/cartesian_force_controller/target_wrench", 10)
self.timer = node.create_timer(0.1, self.timer_callback)
self.publish = False
self._target_wrench = WrenchStamped()
@property
def target_wrench(self) -> WrenchStamped:
return self._target_wrench
@target_wrench.setter
def target_wrench(self, wrench: WrenchStamped):
self._target_wrench = wrench
def timer_callback(self):
if self.publish:
self.publisher.publish(self._target_wrench)
class CartesianForceControllerStandalone(Node, CartesianForceController):
def __init__(self, node_name:str = "rbs_gym_controller", use_sim_time: bool = True):
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
Node.__init__(self, node_name)
self.set_parameters(
[Parameter("use_sim_time", type_=Parameter.Type.BOOL, value=use_sim_time)]
)
CartesianForceController.__init__(self, node=self)
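# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# Assumes an already-initialised and spinning rclpy Node `node`; the namespace and
# frame id are hypothetical.
#
#   controller = CartesianForceController(node, namespace="/rbs_arm")
#   wrench = WrenchStamped()
#   wrench.header.frame_id = "tool0"
#   wrench.wrench.force.z = 5.0
#   controller.target_wrench = wrench
#   controller.publish = True  # the 10 Hz timer then publishes the target wrench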

View file

@ -1,121 +0,0 @@
import rclpy
from rclpy.node import Node
import numpy as np
import quaternion
from geometry_msgs.msg import Twist
from geometry_msgs.msg import PoseStamped
import tf2_ros
import sys
import time
import threading
import os
class VelocityController:
"""Convert Twist messages to PoseStamped
Use this node to integrate twist messages into a moving target pose in
Cartesian space. An initial TF lookup assures that the target pose always
starts at the robot's end-effector.
"""
def __init__(self, node: Node, topic_pose: str, topic_twist: str, base_frame: str, ee_frame: str):
self.node = node
self._frame_id = base_frame
self._end_effector = ee_frame
self.tf_buffer = tf2_ros.Buffer()
self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, node)
self.rot = np.quaternion(1, 0, 0, 0)  # identity rotation (w, x, y, z)
self.pos = [0, 0, 0]
self.pub = node.create_publisher(PoseStamped, topic_pose, 3)
self.sub = node.create_subscription(Twist, topic_twist, self.twist_cb, 1)
self.last = time.time()
self.startup_done = False
period = 1.0 / node.declare_parameter("publishing_rate", 100).value
self.timer = node.create_timer(period, self.publish)
self.publish_it = False
self.thread = threading.Thread(target=self.startup, daemon=True)
self.thread.start()
def startup(self):
"""Make sure to start at the robot's current pose"""
# Wait until we entered spinning in the main thread.
time.sleep(1)
try:
start = self.tf_buffer.lookup_transform(
target_frame=self._frame_id,
source_frame=self._end_effector,
time=rclpy.time.Time(),
)
except (
tf2_ros.InvalidArgumentException,
tf2_ros.LookupException,
tf2_ros.ConnectivityException,
tf2_ros.ExtrapolationException,
) as e:
print(f"Startup failed: {e}")
os._exit(1)
self.pos[0] = start.transform.translation.x
self.pos[1] = start.transform.translation.y
self.pos[2] = start.transform.translation.z
self.rot.x = start.transform.rotation.x
self.rot.y = start.transform.rotation.y
self.rot.z = start.transform.rotation.z
self.rot.w = start.transform.rotation.w
self.startup_done = True
def twist_cb(self, data):
"""Numerically integrate twist message into a pose
Use global self.frame_id as reference for the navigation commands.
"""
now = time.time()
dt = now - self.last
self.last = now
# Position update
self.pos[0] += data.linear.x * dt
self.pos[1] += data.linear.y * dt
self.pos[2] += data.linear.z * dt
# Orientation update
wx = data.angular.x
wy = data.angular.y
wz = data.angular.z
_, q = quaternion.integrate_angular_velocity(
lambda _: (wx, wy, wz), 0, dt, self.rot
)
self.rot = q[-1] # the last one is after dt passed
def publish(self):
if not self.startup_done:
return
if not self.publish_it:
return
try:
msg = PoseStamped()
msg.header.stamp = self.node.get_clock().now().to_msg()
msg.header.frame_id = self._frame_id
msg.pose.position.x = self.pos[0]
msg.pose.position.y = self.pos[1]
msg.pose.position.z = self.pos[2]
msg.pose.orientation.x = self.rot.x
msg.pose.orientation.y = self.rot.y
msg.pose.orientation.z = self.rot.z
msg.pose.orientation.w = self.rot.w
self.pub.publish(msg)
except Exception:
# Swallow 'publish() to closed topic' error.
# This rarely happens on killing this node.
pass
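# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# Topic and frame names below are hypothetical. The controller integrates incoming
# Twist commands into a PoseStamped target that is published at `publishing_rate` Hz.
#
#   rclpy.init()
#   node = rclpy.create_node("velocity_controller_example")
#   vc = VelocityController(
#       node,
#       topic_pose="/cartesian_motion_controller/target_frame",
#       topic_twist="/rbs_gym/twist_cmd",
#       base_frame="base_link",
#       ee_frame="tool0",
#   )
#   vc.publish_it = True
#   rclpy.spin(node)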

View file

@ -1,50 +0,0 @@
from typing import Optional
from control_msgs.action import GripperCommand
from rclpy.action import ActionClient
class GripperController:
def __init__(self, node,
open_position: Optional[float] = 0.0,
close_position: Optional[float] = 0.0,
max_effort: Optional[float] = 0.0,
namespace: Optional[str] = ""):
self._node = node
self._action_client = ActionClient(node, GripperCommand,
namespace + "/gripper_controller/gripper_cmd")
self._open_position = open_position
self._close_position = close_position
self._max_effort = max_effort
self.is_executed = False
def open(self):
self.send_goal(self._open_position)
def close(self):
self.send_goal(self._close_position)
def send_goal(self, goal: float):
goal_msg = GripperCommand.Goal()
goal_msg.command.position = goal
goal_msg.command.max_effort = self._max_effort
self.is_executed = False
self._action_client.wait_for_server()
self._send_goal_future = self._action_client.send_goal_async(goal_msg)
self._send_goal_future.add_done_callback(self.goal_response_callback)
def goal_response_callback(self, future):
goal_handle = future.result()
if not goal_handle.accepted:
self._node.get_logger().info('Goal rejected :(')
return
self._node.get_logger().info('Goal accepted :)')
self._get_result_future = goal_handle.get_result_async()
self._get_result_future.add_done_callback(self.get_result_callback)
def get_result_callback(self, future):
result = future.result().result
self._node.get_logger().info(f"Gripper position: {result.position}")
self.is_executed = True
def wait_until_executed(self):
# Busy-wait until the result callback marks the goal as executed
while not self.is_executed:
pass
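# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# Gripper positions, effort and namespace are hypothetical; `node` must be spun by an
# executor so the action client callbacks can run.
#
#   gripper = GripperController(node,
#                               open_position=0.064,
#                               close_position=0.0,
#                               max_effort=5.0,
#                               namespace="/rbs_arm")
#   gripper.close()
#   gripper.wait_until_executed()
#   gripper.open()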

View file

@ -1,25 +0,0 @@
from typing import Optional
from std_msgs.msg import Float64MultiArray
class JointEffortController:
def __init__(self, node, namespace: Optional[str] = "") -> None:
self.node = node
self.publisher = node.create_publisher(Float64MultiArray,
namespace + "/joint_effort_controller/commands", 10)
# self.timer = node.create_timer(0.1, self.timer_callback)
# self.publish = True
self._effort_array = Float64MultiArray()
@property
def target_effort(self) -> Float64MultiArray:
return self._effort_array
@target_effort.setter
def target_effort(self, data: Float64MultiArray):
self._effort_array = data
# def timer_callback(self):
# if self.publish:
# self.publisher.publish(self._effort_array)
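# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# Effort values are hypothetical; the `joint_effort_controller` expects one entry per
# actuated joint.
#
#   controller = JointEffortController(node, namespace="/rbs_arm")
#   effort_cmd = Float64MultiArray()
#   effort_cmd.data = [0.0, 1.5, -0.5, 0.0, 0.2, 0.0]
#   controller.target_effort = effort_cmd
#   controller.publisher.publish(controller.target_effort)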

View file

@ -1,4 +0,0 @@
from .camera_subscriber import CameraSubscriber, CameraSubscriberStandalone
from .twist_subscriber import TwistSubscriber
from .joint_states import JointStates
# from .octree import OctreeCreator

View file

@ -1,118 +0,0 @@
import sys
from threading import Lock, Thread
from typing import Optional, Union
import rclpy
from rclpy.callback_groups import CallbackGroup
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from rclpy.parameter import Parameter
from rclpy.qos import (
QoSDurabilityPolicy,
QoSHistoryPolicy,
QoSProfile,
QoSReliabilityPolicy,
)
from sensor_msgs.msg import Image, PointCloud2
class CameraSubscriber:
def __init__(
self,
node: Node,
topic: str,
is_point_cloud: bool,
callback_group: Optional[CallbackGroup] = None,
):
self._node = node
# Prepare the subscriber
if is_point_cloud:
camera_msg_type = PointCloud2
else:
camera_msg_type = Image
self.__observation = camera_msg_type()
self._node.create_subscription(
msg_type=camera_msg_type,
topic=topic,
callback=self.observation_callback,
qos_profile=QoSProfile(
reliability=QoSReliabilityPolicy.RELIABLE,
durability=QoSDurabilityPolicy.VOLATILE,
history=QoSHistoryPolicy.KEEP_LAST,
depth=1,
),
callback_group=callback_group,
)
self.__observation_mutex = Lock()
self.__new_observation_available = False
def observation_callback(self, msg):
"""
Callback for getting observation.
"""
self.__observation_mutex.acquire()
self.__observation = msg
self.__new_observation_available = True
self._node.get_logger().debug("New observation received.")
self.__observation_mutex.release()
def get_observation(self) -> Union[PointCloud2, Image]:
"""
Get the last received observation.
"""
self.__observation_mutex.acquire()
observation = self.__observation
self.__observation_mutex.release()
return observation
def reset_new_observation_checker(self):
"""
Reset checker of new observations, i.e. `self.new_observation_available()`
"""
self.__observation_mutex.acquire()
self.__new_observation_available = False
self.__observation_mutex.release()
@property
def new_observation_available(self):
"""
Check if new observation is available since `self.reset_new_observation_checker()` was called
"""
return self.__new_observation_available
class CameraSubscriberStandalone(Node, CameraSubscriber):
def __init__(
self,
topic: str,
is_point_cloud: bool,
node_name: str = "rbs_gym_camera_sub",
use_sim_time: bool = True,
):
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
Node.__init__(self, node_name)
self.set_parameters(
[Parameter("use_sim_time", type_=Parameter.Type.BOOL, value=use_sim_time)]
)
CameraSubscriber.__init__(
self, node=self, topic=topic, is_point_cloud=is_point_cloud
)
# Spin the node in a separate thread
self._executor = SingleThreadedExecutor()
self._executor.add_node(self)
self._executor_thread = Thread(target=self._executor.spin, daemon=True, args=())
self._executor_thread.start()
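# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# The topic name is hypothetical; the standalone variant owns its own node and executor
# thread, so no external spinning is required.
#
#   camera = CameraSubscriberStandalone(
#       topic="/inner_rgbd_camera/image", is_point_cloud=False
#   )
#   if camera.new_observation_available:
#       image_msg = camera.get_observation()
#       camera.reset_new_observation_checker()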

View file

@ -1,107 +0,0 @@
from array import array
from threading import Lock
from typing import Optional
from rclpy.callback_groups import CallbackGroup
from rclpy.node import Node
from rclpy.qos import (
QoSDurabilityPolicy,
QoSHistoryPolicy,
QoSProfile,
QoSReliabilityPolicy,
)
from sensor_msgs.msg import JointState
class JointStates:
def __init__(
self,
node: Node,
topic: str,
callback_group: Optional[CallbackGroup] = None,
):
self._node = node
self.__observation = JointState()
self._node.create_subscription(
msg_type=JointState,
topic=topic,
callback=self.observation_callback,
qos_profile=QoSProfile(
reliability=QoSReliabilityPolicy.RELIABLE,
durability=QoSDurabilityPolicy.VOLATILE,
history=QoSHistoryPolicy.KEEP_LAST,
depth=1,
),
callback_group=callback_group,
)
self.__observation_mutex = Lock()
self.__new_observation_available = False
def observation_callback(self, msg):
"""
Callback for getting observation.
"""
self.__observation_mutex.acquire()
self.__observation = msg
self.__new_observation_available = True
self._node.get_logger().debug("New observation received.")
self.__observation_mutex.release()
def get_observation(self) -> JointState:
"""
Get the last received observation.
"""
self.__observation_mutex.acquire()
observation = self.__observation
self.__observation_mutex.release()
return observation
def get_positions(self) -> array:
"""
Get the last recorded observation position
"""
self.__observation_mutex.acquire()
observation = self.__observation.position
self.__observation_mutex.release()
return observation
def get_velocities(self) -> array:
"""
Get the last recorded observation velocity
"""
self.__observation_mutex.acquire()
observation = self.__observation.velocity
self.__observation_mutex.release()
return observation
def get_efforts(self) -> array:
"""
Get the last recorded observation effort
"""
self.__observation_mutex.acquire()
observation = self.__observation.effort
self.__observation_mutex.release()
return observation
def reset_new_observation_checker(self):
"""
Reset checker of new observations, i.e. `self.new_observation_available()`
"""
self.__observation_mutex.acquire()
self.__new_observation_available = False
self.__observation_mutex.release()
@property
def new_observation_available(self):
"""
Check if new observation is available since `self.reset_new_observation_checker()` was called
"""
return self.__new_observation_available
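# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# The topic name is hypothetical and `node` must be spun by an executor so the
# subscription callback can fire.
#
#   joint_states = JointStates(node=node, topic="/joint_states")
#   positions = joint_states.get_positions()
#   velocities = joint_states.get_velocities()
#   efforts = joint_states.get_efforts()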

View file

@ -1,211 +0,0 @@
from typing import List, Tuple
import numpy as np
import ocnn
import open3d
import torch
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from env_manager.utils import Tf2Listener, conversions
class OctreeCreator:
def __init__(
self,
node: Node,
tf2_listener: Tf2Listener,
reference_frame_id: str,
min_bound: Tuple[float, float, float] = (-1.0, -1.0, -1.0),
max_bound: Tuple[float, float, float] = (1.0, 1.0, 1.0),
include_color: bool = False,
# Note: For efficiency, the first channel of RGB is used for intensity
include_intensity: bool = False,
depth: int = 4,
full_depth: int = 2,
adaptive: bool = False,
adp_depth: int = 4,
normals_radius: float = 0.05,
normals_max_nn: int = 10,
node_dis: bool = True,
node_feature: bool = False,
split_label: bool = False,
th_normal: float = 0.1,
th_distance: float = 2.0,
extrapolate: bool = False,
save_pts: bool = False,
key2xyz: bool = False,
debug_draw: bool = False,
debug_write_octree: bool = False,
):
self._node = node
# Listener of tf2 transforms is shared with the owner
self.__tf2_listener = tf2_listener
# Parameters
self._reference_frame_id = reference_frame_id
self._min_bound = min_bound
self._max_bound = max_bound
self._include_color = include_color
self._include_intensity = include_intensity
self._normals_radius = normals_radius
self._normals_max_nn = normals_max_nn
self._debug_draw = debug_draw
self._debug_write_octree = debug_write_octree
# Create a converter between points and octree
self._points_to_octree = ocnn.Points2Octree(
depth=depth,
full_depth=full_depth,
node_dis=node_dis,
node_feature=node_feature,
split_label=split_label,
adaptive=adaptive,
adp_depth=adp_depth,
th_normal=th_normal,
th_distance=th_distance,
extrapolate=extrapolate,
save_pts=save_pts,
key2xyz=key2xyz,
bb_min=min_bound,
bb_max=max_bound,
)
def __call__(self, ros_point_cloud2: PointCloud2) -> torch.Tensor:
# Convert to Open3D PointCloud
open3d_point_cloud = conversions.pointcloud2_to_open3d(
ros_point_cloud2=ros_point_cloud2,
include_color=self._include_color,
include_intensity=self._include_intensity,
)
# Preprocess point cloud (transform to robot frame, crop to workspace and estimate normals)
open3d_point_cloud = self.preprocess_point_cloud(
open3d_point_cloud=open3d_point_cloud,
camera_frame_id=ros_point_cloud2.header.frame_id,
reference_frame_id=self._reference_frame_id,
min_bound=self._min_bound,
max_bound=self._max_bound,
normals_radius=self._normals_radius,
normals_max_nn=self._normals_max_nn,
)
# Draw if needed
if self._debug_draw:
open3d.visualization.draw_geometries(
[
open3d_point_cloud,
open3d.geometry.TriangleMesh.create_coordinate_frame(
size=0.2, origin=[0.0, 0.0, 0.0]
),
],
point_show_normal=True,
)
# Construct octree from such point cloud
octree = self.construct_octree(
open3d_point_cloud,
include_color=self._include_color,
include_intensity=self._include_intensity,
)
# Write if needed
if self._debug_write_octree:
ocnn.write_octree(octree, "octree.octree")
return octree
def preprocess_point_cloud(
self,
open3d_point_cloud: open3d.geometry.PointCloud,
camera_frame_id: str,
reference_frame_id: str,
min_bound: List[float],
max_bound: List[float],
normals_radius: float,
normals_max_nn: int,
) -> open3d.geometry.PointCloud:
# Check if point cloud has any points
if not open3d_point_cloud.has_points():
self._node.get_logger().warn(
"Point cloud has no points. Pre-processing skipped."
)
return open3d_point_cloud
# Get transformation from camera to robot and use it to transform point
# cloud into robot's base coordinate frame
if camera_frame_id != reference_frame_id:
transform = self.__tf2_listener.lookup_transform_sync(
target_frame=reference_frame_id, source_frame=camera_frame_id
)
transform_mat = conversions.transform_to_matrix(transform=transform)
open3d_point_cloud = open3d_point_cloud.transform(transform_mat)
# Crop point cloud to include only the workspace
open3d_point_cloud = open3d_point_cloud.crop(
bounding_box=open3d.geometry.AxisAlignedBoundingBox(
min_bound=min_bound, max_bound=max_bound
)
)
# Check if any points remain in the area after cropping
if not open3d_point_cloud.has_points():
self._node.get_logger().warn(
"Point cloud has no points after cropping it to the workspace volume."
)
return open3d_point_cloud
# Estimate normal vector for each cloud point and orient these towards the camera
open3d_point_cloud.estimate_normals(
search_param=open3d.geometry.KDTreeSearchParamHybrid(
radius=normals_radius, max_nn=normals_max_nn
),
fast_normal_computation=True,
)
camera_location = (
transform_mat[0:3, 3] if camera_frame_id != reference_frame_id else (0.0, 0.0, 0.0)
)
open3d_point_cloud.orient_normals_towards_camera_location(
camera_location=camera_location
)
return open3d_point_cloud
def construct_octree(
self,
open3d_point_cloud: open3d.geometry.PointCloud,
include_color: bool,
include_intensity: bool,
) -> torch.Tensor:
# In case the point cloud has no points, add a single point
# This is a workaround because I was not able to create an empty octree without getting a segfault
# TODO: Figure out a better way of making an empty octree (it does not occur if setup correctly, so probably not worth it)
if not open3d_point_cloud.has_points():
open3d_point_cloud.points.append(
(
(self._min_bound[0] + self._max_bound[0]) / 2,
(self._min_bound[1] + self._max_bound[1]) / 2,
(self._min_bound[2] + self._max_bound[2]) / 2,
)
)
open3d_point_cloud.normals.append((0.0, 0.0, 0.0))
if include_color or include_intensity:
open3d_point_cloud.colors.append((0.0, 0.0, 0.0))
# Convert open3d point cloud into octree points
octree_points = conversions.open3d_point_cloud_to_octree_points(
open3d_point_cloud=open3d_point_cloud,
include_color=include_color,
include_intensity=include_intensity,
)
# Convert octree points into 1D Tensor (via ndarray)
# Note: Copy of points here is necessary as ndarray would otherwise be immutable
octree_points_ndarray = np.frombuffer(np.copy(octree_points.buffer()), np.uint8)
octree_points_tensor = torch.from_numpy(octree_points_ndarray)
# Finally, create an octree from the points
return self._points_to_octree(octree_points_tensor)
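# --- Illustrative usage sketch (editor's addition; not part of the original file) ---
# Assumes an rclpy `node`, a `Tf2Listener` and an incoming PointCloud2 message; the
# frame id and bounds are hypothetical.
#
#   octree_creator = OctreeCreator(
#       node=node,
#       tf2_listener=tf2_listener,
#       reference_frame_id="base_link",
#       min_bound=(-0.5, -0.5, 0.0),
#       max_bound=(0.5, 0.5, 0.5),
#   )
#   octree_tensor = octree_creator(point_cloud_msg)  # torch.Tensor consumable by O-CNN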

View file

@ -1,81 +0,0 @@
import sys
from threading import Lock, Thread
from typing import Optional, Union
import rclpy
from rclpy.callback_groups import CallbackGroup
from rclpy.executors import SingleThreadedExecutor
from rclpy.node import Node
from rclpy.parameter import Parameter
from rclpy.qos import (
QoSDurabilityPolicy,
QoSHistoryPolicy,
QoSProfile,
QoSReliabilityPolicy,
)
from geometry_msgs.msg import TwistStamped
class TwistSubscriber:
def __init__(
self,
node: Node,
topic: str,
callback_group: Optional[CallbackGroup] = None,
):
self._node = node
self.__observation = TwistStamped()
self._node.create_subscription(
msg_type=TwistStamped,
topic=topic,
callback=self.observation_callback,
qos_profile=QoSProfile(
reliability=QoSReliabilityPolicy.RELIABLE,
durability=QoSDurabilityPolicy.VOLATILE,
history=QoSHistoryPolicy.KEEP_LAST,
depth=1,
),
callback_group=callback_group,
)
self.__observation_mutex = Lock()
self.__new_observation_available = False
def observation_callback(self, msg):
"""
Callback for getting observation.
"""
self.__observation_mutex.acquire()
self.__observation = msg
self.__new_observation_available = True
self._node.get_logger().debug("New observation received.")
self.__observation_mutex.release()
def get_observation(self) -> TwistStamped:
"""
Get the last received observation.
"""
self.__observation_mutex.acquire()
observation = self.__observation
self.__observation_mutex.release()
return observation
def reset_new_observation_checker(self):
"""
Reset checker of new observations, i.e. `self.new_observation_available()`
"""
self.__observation_mutex.acquire()
self.__new_observation_available = False
self.__observation_mutex.release()
@property
def new_observation_available(self):
"""
Check if new observation is available since `self.reset_new_observation_checker()` was called
"""
return self.__new_observation_available

View file

@ -1 +0,0 @@
from .manipulation import ManipulationGazeboEnvRandomizer

View file

@ -1,345 +0,0 @@
import abc
from env_manager.models.configs import (
SceneData,
)
from env_manager.scene import Scene
from env_manager.utils.logging import set_log_level
from gym_gz import randomizers
from gym_gz.randomizers import gazebo_env_randomizer
from gym_gz.randomizers.gazebo_env_randomizer import MakeEnvCallable
from scenario import gazebo as scenario
from scenario.bindings.gazebo import GazeboSimulator
from rbs_gym.envs import tasks
SupportedTasks = tasks.Reach | tasks.ReachColorImage | tasks.ReachDepthImage
class ManipulationGazeboEnvRandomizer(
gazebo_env_randomizer.GazeboEnvRandomizer,
randomizers.abc.PhysicsRandomizer,
randomizers.abc.TaskRandomizer,
abc.ABC,
):
"""
Basic randomizer of environments for robotic manipulation inside Ignition Gazebo. This randomizer
also populates the simulated world with the robot, terrain, lighting, and other entities.
"""
POST_RANDOMIZATION_MAX_STEPS = 50
metadata = {"render_modes": ["human"]}
def __init__(
self,
env: MakeEnvCallable,
scene_args: SceneData = SceneData(),
**kwargs,
):
self._scene_data = scene_args
# TODO (a lot of work): Implement proper physics randomization.
if scene_args.physics_rollouts_num != 0:
raise TypeError(
"Proper physics randomization at each reset is not yet implemented. Please set `physics_rollouts_num=0`."
)
# Update kwargs before passing them to the task constructor (some tasks might need them)
# TODO: update the logic once we need to choose between cameras
cameras: list[dict[str, str | int]] = []
for camera in self._scene_data.camera:
camera_info: dict[str, str | int] = {}
camera_info["name"] = camera.name
camera_info["type"] = camera.type
camera_info["width"] = camera.width
camera_info["height"] = camera.height
cameras.append(camera_info)
kwargs.update({"camera_info": cameras})
# Initialize base classes
randomizers.abc.TaskRandomizer.__init__(self)
randomizers.abc.PhysicsRandomizer.__init__(
self, randomize_after_rollouts_num=scene_args.physics_rollouts_num
)
gazebo_env_randomizer.GazeboEnvRandomizer.__init__(
self, env=env, physics_randomizer=self, **kwargs
)
self.__env_initialised = False
##########################
# PhysicsRandomizer impl #
##########################
def init_physics_preset(self, task: SupportedTasks):
self.set_gravity(task=task)
def randomize_physics(self, task: SupportedTasks, **kwargs):
self.set_gravity(task=task)
def set_gravity(self, task: SupportedTasks):
if not task.world.to_gazebo().set_gravity(
(
task.np_random.normal(loc=self._gravity[0], scale=self._gravity_std[0]),
task.np_random.normal(loc=self._gravity[1], scale=self._gravity_std[1]),
task.np_random.normal(loc=self._gravity[2], scale=self._gravity_std[2]),
)
):
raise RuntimeError("Failed to set the gravity")
def get_engine(self):
return scenario.PhysicsEngine_dart
#######################
# TaskRandomizer impl #
#######################
def randomize_task(self, task: SupportedTasks, **kwargs):
"""
Randomization of the task, which is called on each reset of the environment.
Note that this randomizer reset is called before `reset_task()`.
"""
# Get gazebo instance associated with the task
if "gazebo" not in kwargs:
raise ValueError("Randomizer does not have access to the gazebo interface")
if isinstance(kwargs["gazebo"], GazeboSimulator):
gazebo: GazeboSimulator = kwargs["gazebo"]
else:
raise RuntimeError("Provide GazeboSimulator instance")
# Initialise the environment on the first iteration
if not self.__env_initialised:
self.init_env(task=task, gazebo=gazebo)
self.__env_initialised = True
# Perform pre-randomization steps
self.pre_randomization(task=task)
self._scene.reset_scene()
# Perform post-randomization steps
# TODO: Something in post-randomization causes GUI client to freeze during reset (the simulation server still works fine)
self.post_randomization(task, gazebo)
###################
# Randomizer impl #
###################
# Initialisation #
def init_env(self, task: SupportedTasks, gazebo: scenario.GazeboSimulator):
"""
Initialise an instance of the environment before the very first iteration
"""
self._scene = Scene(
task,
gazebo,
self._scene_data,
task.get_parameter("robot_description").get_parameter_value().string_value,
)
# Init Scene for task
task.scene = self._scene
# Set log level for (Gym) Ignition
set_log_level(log_level=task.get_logger().get_effective_level().name)
self._scene.init_scene()
# Pre-randomization #
def pre_randomization(self, task: SupportedTasks):
"""
Perform steps that are required before randomization is performed.
"""
for obj in self._scene.objects:
# If desired, select random spawn position from the segments
# It is performed here because the object spawn position may also be relevant for robot and camera randomization
segments_len = len(obj.randomize.random_spawn_position_segments)
if segments_len > 1:
# Randomly select a segment between two points
start_index = task.np_random.randint(segments_len - 1)
segment = (
obj.randomize.random_spawn_position_segments[start_index],
obj.randomize.random_spawn_position_segments[start_index + 1],
)
# Randomly select a point on the segment and use it as the new object spawn position
intersect = task.np_random.random()
direction = (
segment[1][0] - segment[0][0],
segment[1][1] - segment[0][1],
segment[1][2] - segment[0][2],
)
obj.position = (
segment[0][0] + intersect * direction[0],
segment[0][1] + intersect * direction[1],
segment[0][2] + intersect * direction[2],
)
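# Worked example (editor's addition): with spawn segments [(0, 0, 0), (1, 0, 0), (1, 1, 0)],
# start_index = 0 and intersect = 0.25, the direction is (1, 0, 0) and the resulting
# spawn position is (0.25, 0.0, 0.0), i.e. a quarter of the way along the first segment.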
# TODO: add bounding box with multiple objects
# # Update also the workspace centre (and bounding box) if desired
# if self._object_random_spawn_position_update_workspace_centre:
# task.workspace_centre = (
# self._object_spawn_position[0],
# self._object_spawn_position[1],
# # Z workspace is currently kept the same on purpose
# task.workspace_centre[2],
# )
# workspace_volume_half = (
# task.workspace_volume[0] / 2,
# task.workspace_volume[1] / 2,
# task.workspace_volume[2] / 2,
# )
# task.workspace_min_bound = (
# task.workspace_centre[0] - workspace_volume_half[0],
# task.workspace_centre[1] - workspace_volume_half[1],
# task.workspace_centre[2] - workspace_volume_half[2],
# )
# task.workspace_max_bound = (
# task.workspace_centre[0] + workspace_volume_half[0],
# task.workspace_centre[1] + workspace_volume_half[1],
# task.workspace_centre[2] + workspace_volume_half[2],
# )
def post_randomization(
self, task: SupportedTasks, gazebo: scenario.GazeboSimulator
):
"""
Perform steps required once randomization is complete and the simulation can be stepped unpaused.
"""
def perform_gazebo_step():
if not gazebo.step():
raise RuntimeError("Failed to execute an unpaused Gazebo step")
def wait_for_new_observations():
attempts = 0
while True:
attempts += 1
if attempts % self.POST_RANDOMIZATION_MAX_STEPS == 0:
task.get_logger().debug(
f"Waiting for new joint state after reset. Iteration #{attempts}..."
)
else:
task.get_logger().debug("Waiting for new joint state after reset.")
perform_gazebo_step()
# If camera_subs is defined, ensure all new observations are available
if hasattr(task, "camera_subs") and not all(
sub.new_observation_available for sub in task.camera_subs
):
continue
return # Observations are ready
attempts = 0
processed_objects = set()
# Early exit if the maximum number of steps is already reached
if self.POST_RANDOMIZATION_MAX_STEPS == 0:
task.get_logger().error(
"Robot keeps falling through the terrain. There is something wrong..."
)
return
# Ensure no objects are overlapping
for obj in self._scene.objects:
if not obj.randomize.random_pose or obj.name in processed_objects:
continue
# Try repositioning until no overlap or maximum attempts reached
for _ in range(self.POST_RANDOMIZATION_MAX_STEPS):
task.get_logger().debug(
f"Checking overlap for {obj.name}, attempt {attempts + 1}"
)
if self._scene.check_object_overlapping(obj):
processed_objects.add(obj.name)
break # No overlap, move to next object
else:
task.get_logger().debug(
f"Objects overlapping, trying new positions for {obj.name}"
)
perform_gazebo_step()
else:
task.get_logger().warn(
f"Could not place {obj.name} without overlap after {self.POST_RANDOMIZATION_MAX_STEPS} attempts"
)
continue # Move to next object in case of failure
# Execute steps until new observations are available
if hasattr(task, "camera_subs") or task._enable_gripper:
wait_for_new_observations()
# Final check if observations are not available within the maximum steps
if self.POST_RANDOMIZATION_MAX_STEPS == attempts:
task.get_logger().error("Cannot obtain new observation.")
# =============================
# Additional features and debug
# =============================
# def visualise_workspace(
# self,
# task: SupportedTasks,
# gazebo: scenario.GazeboSimulator,
# color: Tuple[float, float, float, float] = (0, 1, 0, 0.8),
# ):
# # Insert a translucent box visible only in simulation with no physical interactions
# models.Box(
# world=task.world,
# name="_workspace_volume",
# position=self._object_spawn_position,
# orientation=(0, 0, 0, 1),
# size=task.workspace_volume,
# collision=False,
# visual=True,
# gui_only=True,
# static=True,
# color=color,
# )
# # Execute a paused run to process model insertion
# if not gazebo.run(paused=True):
# raise RuntimeError("Failed to execute a paused Gazebo run")
#
# def visualise_spawn_volume(
# self,
# task: SupportedTasks,
# gazebo: scenario.GazeboSimulator,
# color: Tuple[float, float, float, float] = (0, 0, 1, 0.8),
# color_with_height: Tuple[float, float, float, float] = (1, 0, 1, 0.7),
# ):
# # Insert translucent boxes visible only in simulation with no physical interactions
# models.Box(
# world=task.world,
# name="_object_random_spawn_volume",
# position=self._object_spawn_position,
# orientation=(0, 0, 0, 1),
# size=self._object_random_spawn_volume,
# collision=False,
# visual=True,
# gui_only=True,
# static=True,
# color=color,
# )
# models.Box(
# world=task.world,
# name="_object_random_spawn_volume_with_height",
# position=self._object_spawn_position,
# orientation=(0, 0, 0, 1),
# size=self._object_random_spawn_volume,
# collision=False,
# visual=True,
# gui_only=True,
# static=True,
# color=color_with_height,
# )
# # Execute a paused run to process model insertion
# if not gazebo.run(paused=True):
# raise RuntimeError("Failed to execute a paused Gazebo run")

View file

@ -1,4 +0,0 @@
# from .curriculums import *
# from .grasp import *
# from .grasp_planetary import *
from .reach import *

View file

@ -1 +0,0 @@
from .grasp import GraspCurriculum

View file

@ -1,700 +0,0 @@
from __future__ import annotations
import enum
import itertools
import math
from collections import deque
from typing import Callable, Deque, Dict, Optional, Tuple, Type
import numpy as np
from gym_gz.base.task import Task
from gym_gz.utils.typing import Reward
INFO_MEAN_STEP_KEY: str = "__mean_step__"
INFO_MEAN_EPISODE_KEY: str = "__mean_episode__"
@enum.unique
class CurriculumStage(enum.Enum):
"""
Ordered enum that represents the stages of a curriculum for an RL task.
"""
@classmethod
def first(cls) -> CurriculumStage:
return cls(1)
@classmethod
def last(cls) -> CurriculumStage:
return cls(len(cls))
def next(self) -> Optional[CurriculumStage]:
next_value = self.value + 1
if next_value > self.last().value:
return None
else:
return self(next_value)
def previous(self) -> Optional[CurriculumStage]:
previous_value = self.value - 1
if previous_value < self.first().value:
return None
else:
return self(previous_value)
class StageRewardCurriculum:
"""
Curriculum that begins to compute rewards for a stage once all previous stages are complete.
"""
PERSISTENT_ID: str = "PERSISTENT"
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
curriculum_stage: Type[CurriculumStage],
stage_reward_multiplier: float,
dense_reward: bool = False,
**kwargs,
):
if len(curriculum_stage) == 0:
raise ValueError(f"{curriculum_stage} has length of 0")
self.__use_dense_reward = dense_reward
if self.__use_dense_reward:
raise ValueError(
"Dense reward is currently not implemented for any curriculum"
)
# Setup internals
self._stage_type = curriculum_stage
self._stage_reward_functions: Dict[curriculum_stage, Callable] = {
curriculum_stage(stage): getattr(self, f"get_reward_{stage.name}")
for stage in iter(curriculum_stage)
}
self.__stage_reward_multipliers: Dict[curriculum_stage, float] = {
curriculum_stage(stage): stage_reward_multiplier ** (stage.value - 1)
for stage in iter(curriculum_stage)
}
self.stages_completed_this_episode: Dict[curriculum_stage, bool] = {
curriculum_stage(stage): False for stage in iter(curriculum_stage)
}
self.__stages_rewards_this_episode: Dict[curriculum_stage, float] = {
curriculum_stage(stage): 0.0 for stage in iter(curriculum_stage)
}
self.__stages_rewards_this_episode[self.PERSISTENT_ID] = 0.0
self.__episode_succeeded: bool = False
self.__episode_failed: bool = False
def get_reward(self, only_last_stage: bool = False, **kwargs) -> Reward:
reward = 0.0
# Determine the stage at which to start computing reward [performance - done stages give no reward]
if only_last_stage:
first_stage_to_process = self._stage_type.last()
else:
# Fall back to the last stage in case every stage has already been completed
first_stage_to_process = self._stage_type.last()
for stage in iter(self._stage_type):
if not self.stages_completed_this_episode[stage]:
first_stage_to_process = stage
break
# Iterate over all stages that might need to be processed
for stage in range(first_stage_to_process.value, len(self._stage_type) + 1):
stage = self._stage_type(stage)
# Compute reward for the current stage
stage_reward = self._stage_reward_functions[stage](**kwargs)
# Multiply by the reward multiplier
stage_reward *= self.__stage_reward_multipliers[stage]
# Add to the total step reward
reward += stage_reward
# Add reward to the list for info
self.__stages_rewards_this_episode[stage] += stage_reward
# Break if stage is not yet completed [performance - next stages won't give any reward]
if not self.stages_completed_this_episode[stage]:
break
# If the last stage is complete, the episode has succeeded
self.__episode_succeeded = self.stages_completed_this_episode[
self._stage_type.last()
]
if self.__episode_succeeded:
return reward
# Add persistent reward that is added regardless of the episode (unless task already succeeded)
persistent_reward = self.get_persistent_reward(**kwargs)
# Add to the total step reward
reward += persistent_reward
# Add reward to the list for info
self.__stages_rewards_this_episode[self.PERSISTENT_ID] += persistent_reward
return reward
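# Worked example (editor's addition): with `stage_reward_multiplier = 2.0` and four
# stages, the per-stage multipliers are 2.0 ** (stage.value - 1) = 1, 2, 4 and 8, so
# completing a later stage contributes progressively more reward than an earlier one.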
def is_done(self) -> bool:
if self.__episode_succeeded:
# The episode ended with success
self.on_episode_success()
return True
elif self.__episode_failed:
# The episode ended due to failure
self.on_episode_failure()
return True
else:
# Otherwise, the episode is not yet done
return False
def get_info(self) -> Dict:
# Whether the episode succeeded
info = {
"is_success": self.__episode_succeeded,
}
# What stage was reached during this episode so far
for stage in iter(self._stage_type):
reached_stage = stage
if not self.stages_completed_this_episode[stage]:
break
info.update(
{
f"{self.INFO_CURRICULUM_PREFIX}{INFO_MEAN_EPISODE_KEY}ep_reached_stage_mean": reached_stage.value,
}
)
# Rewards for the individual stages
info.update(
{
f"{self.INFO_CURRICULUM_PREFIX}{INFO_MEAN_EPISODE_KEY}ep_rew_mean_{stage.value}_{stage.name.lower()}": self.__stages_rewards_this_episode[
stage
]
for stage in iter(self._stage_type)
}
)
info.update(
{
f"{self.INFO_CURRICULUM_PREFIX}{INFO_MEAN_EPISODE_KEY}ep_rew_mean_{self.PERSISTENT_ID.lower()}": self.__stages_rewards_this_episode[
self.PERSISTENT_ID
]
}
)
return info
def reset_task(self):
if not (self.__episode_succeeded or self.__episode_failed):
# The episode ended due to timeout
self.on_episode_timeout()
# Reset internals
self.stages_completed_this_episode = dict.fromkeys(
self.stages_completed_this_episode, False
)
self.__stages_rewards_this_episode = dict.fromkeys(
self.__stages_rewards_this_episode, 0.0
)
self.__stages_rewards_this_episode[self.PERSISTENT_ID] = 0.0
self.__episode_succeeded = False
self.__episode_failed = False
@property
def episode_succeeded(self) -> bool:
return self.__episode_succeeded
@episode_succeeded.setter
def episode_succeeded(self, value: bool):
self.__episode_succeeded = value
@property
def episode_failed(self) -> bool:
return self.__episode_failed
@episode_failed.setter
def episode_failed(self, value: bool):
self.__episode_failed = value
@property
def use_dense_reward(self) -> bool:
return self.__use_dense_reward
def get_persistent_reward(self, **kwargs) -> float:
"""
Virtual method.
"""
reward = 0.0
return reward
def on_episode_success(self):
"""
Virtual method.
"""
pass
def on_episode_failure(self):
"""
Virtual method.
"""
pass
def on_episode_timeout(self):
"""
Virtual method.
"""
pass
class SuccessRateImpl:
"""
Moving average over the success rate of the last N episodes.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
initial_success_rate: float = 0.0,
rolling_average_n: int = 100,
**kwargs,
):
self.__success_rate = initial_success_rate
self.__rolling_average_n = rolling_average_n
# Setup internals
self.__previous_success_rate_weight: int = 0
self.__collected_samples: int = 0
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}_success_rate": self.__success_rate,
}
return info
def update_success_rate(self, is_success: bool):
# Until `rolling_average_n` samples have been collected, use the number of collected samples in the computation
if self.__collected_samples < self.__rolling_average_n:
self.__previous_success_rate_weight = self.__collected_samples
self.__collected_samples += 1
self.__success_rate = (
self.__previous_success_rate_weight * self.__success_rate
+ float(is_success)
) / self.__collected_samples
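# Worked example (editor's addition): starting from success_rate = 0.0, three updates
# with results (True, False, True) give
#   (0 * 0.0 + 1) / 1 = 1.0, (1 * 1.0 + 0) / 2 = 0.5, (2 * 0.5 + 1) / 3 ≈ 0.667,
# i.e. a plain running mean until `rolling_average_n` samples have been collected;
# afterwards the weight saturates at `rolling_average_n - 1` and the estimate behaves
# like an exponential moving average with smoothing factor 1 / rolling_average_n.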
@property
def success_rate(self) -> float:
return self.__success_rate
class WorkspaceScaleCurriculum:
"""
Curriculum that increases the workspace size as the success rate increases.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
task: Task,
success_rate_impl: SuccessRateImpl,
min_workspace_scale: float,
max_workspace_volume: Tuple[float, float, float],
max_workspace_scale_success_rate_threshold: float,
**kwargs,
):
self.__task = task
self.__success_rate_impl = success_rate_impl
self.__min_workspace_scale = min_workspace_scale
self.__max_workspace_volume = max_workspace_volume
self.__max_workspace_scale_success_rate_threshold = (
max_workspace_scale_success_rate_threshold
)
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}{INFO_MEAN_EPISODE_KEY}workspace_scale": self.__workspace_scale,
}
return info
def reset_task(self):
# Update workspace size
self.__update_workspace_size()
def __update_workspace_size(self):
self.__workspace_scale = min(
1.0,
max(
self.__min_workspace_scale,
self.__success_rate_impl.success_rate
/ self.__max_workspace_scale_success_rate_threshold,
),
)
workspace_volume_new = (
self.__workspace_scale * self.__max_workspace_volume[0],
self.__workspace_scale * self.__max_workspace_volume[1],
# Z workspace is currently kept the same on purpose
self.__max_workspace_volume[2],
)
workspace_volume_half_new = (
workspace_volume_new[0] / 2,
workspace_volume_new[1] / 2,
workspace_volume_new[2] / 2,
)
workspace_min_bound_new = (
self.__task.workspace_centre[0] - workspace_volume_half_new[0],
self.__task.workspace_centre[1] - workspace_volume_half_new[1],
self.__task.workspace_centre[2] - workspace_volume_half_new[2],
)
workspace_max_bound_new = (
self.__task.workspace_centre[0] + workspace_volume_half_new[0],
self.__task.workspace_centre[1] + workspace_volume_half_new[1],
self.__task.workspace_centre[2] + workspace_volume_half_new[2],
)
self.__task.add_task_parameter_overrides(
{
"workspace_volume": workspace_volume_new,
"workspace_min_bound": workspace_min_bound_new,
"workspace_max_bound": workspace_max_bound_new,
}
)
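# Worked example (editor's addition): with min_workspace_scale = 0.2,
# max_workspace_scale_success_rate_threshold = 0.6 and a current success rate of 0.3,
# the scale becomes min(1.0, max(0.2, 0.3 / 0.6)) = 0.5, so the X and Y workspace
# extents are halved while the Z extent is deliberately left unchanged.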
class ObjectSpawnVolumeScaleCurriculum:
"""
Curriculum that increases the object spawn volume as the success rate increases.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
task: Task,
success_rate_impl: SuccessRateImpl,
min_object_spawn_volume_scale: float,
max_object_spawn_volume: Tuple[float, float, float],
max_object_spawn_volume_scale_success_rate_threshold: float,
**kwargs,
):
self.__task = task
self.__success_rate_impl = success_rate_impl
self.__min_object_spawn_volume_scale = min_object_spawn_volume_scale
self.__max_object_spawn_volume = max_object_spawn_volume
self.__max_object_spawn_volume_scale_success_rate_threshold = (
max_object_spawn_volume_scale_success_rate_threshold
)
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}{INFO_MEAN_EPISODE_KEY}object_spawn_volume_scale": self.__object_spawn_volume_scale,
}
return info
def reset_task(self):
# Update object_spawn_volume size
self.__update_object_spawn_volume_size()
def __update_object_spawn_volume_size(self):
self.__object_spawn_volume_scale = min(
1.0,
max(
self.__min_object_spawn_volume_scale,
self.__success_rate_impl.success_rate
/ self.__max_object_spawn_volume_scale_success_rate_threshold,
),
)
object_spawn_volume_volume_new = (
self.__object_spawn_volume_scale * self.__max_object_spawn_volume[0],
self.__object_spawn_volume_scale * self.__max_object_spawn_volume[1],
self.__object_spawn_volume_scale * self.__max_object_spawn_volume[2],
)
self.__task.add_randomizer_parameter_overrides(
{
"object_random_spawn_volume": object_spawn_volume_volume_new,
}
)
class ObjectCountCurriculum:
"""
Curriculum that increases the number of objects as the success rate increases.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
task: Task,
success_rate_impl: SuccessRateImpl,
object_count_min: int,
object_count_max: int,
max_object_count_success_rate_threshold: float,
**kwargs,
):
self.__task = task
self.__success_rate_impl = success_rate_impl
self.__object_count_min = object_count_min
self.__object_count_max = object_count_max
self.__max_object_count_success_rate_threshold = (
max_object_count_success_rate_threshold
)
self.__object_count_min_max_diff = object_count_max - object_count_min
if self.__object_count_min_max_diff < 0:
raise Exception(
"'object_count_min' cannot be larger than 'object_count_max'"
)
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}object_count": self.__object_count,
}
return info
def reset_task(self):
# Update object count
self.__update_object_count()
def __update_object_count(self):
self.__object_count = min(
self.__object_count_max,
math.floor(
self.__object_count_min
+ (
self.__success_rate_impl.success_rate
/ self.__max_object_count_success_rate_threshold
)
* self.__object_count_min_max_diff
),
)
self.__task.add_randomizer_parameter_overrides(
{
"object_count": self.__object_count,
}
)
class ArmStuckChecker:
"""
Checker for arm getting stuck.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
task: Task,
arm_stuck_n_steps: int,
arm_stuck_min_joint_difference_norm: float,
**kwargs,
):
self.__task = task
self.__arm_stuck_min_joint_difference_norm = arm_stuck_min_joint_difference_norm
# List of previous joint positions (used to compute the difference norm against an older reading)
self.__previous_joint_positions: Deque[np.ndarray] = deque(
[], maxlen=arm_stuck_n_steps
)
# Counter of how many times the robot got stuck
self.__robot_stuck_total_counter: int = 0
# Initialize list of indices for the arm.
# It is assumed that these indices do not change during the operation
self.__arm_joint_indices = None
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}robot_stuck_count": self.__robot_stuck_total_counter,
}
return info
def reset_task(self):
self.__previous_joint_positions.clear()
joint_positions = self.__get_arm_joint_positions()
if joint_positions is not None:
self.__previous_joint_positions.append(joint_positions)
def is_robot_stuck(self) -> bool:
# Get current position and append to the list of previous ones
current_joint_positions = self.__get_arm_joint_positions()
if current_joint_positions is not None:
self.__previous_joint_positions.append(current_joint_positions)
# Stop checking if there is not yet enough entries in the list
if (
len(self.__previous_joint_positions)
< self.__previous_joint_positions.maxlen
):
return False
# Make sure the length of joint position matches
if len(current_joint_positions) != len(self.__previous_joint_positions[0]):
return False
# Compute joint difference norm only with the `t - arm_stuck_n_steps` entry first (performance reason)
joint_difference_norm = np.linalg.norm(
current_joint_positions - self.__previous_joint_positions[0]
)
# If the difference is large enough, the arm does not appear to be stuck, so skip computing all other entries
if joint_difference_norm > self.__arm_stuck_min_joint_difference_norm:
return False
# If it is too small, consider all other entries as well
joint_difference_norms = np.linalg.norm(
current_joint_positions
- list(itertools.islice(self.__previous_joint_positions, 1, None)),
axis=1,
)
# Return true (stuck) if all joint difference entries are too small
is_stuck = all(
joint_difference_norms < self.__arm_stuck_min_joint_difference_norm
)
self.__robot_stuck_total_counter += int(is_stuck)
return is_stuck
def __get_arm_joint_positions(self) -> Optional[np.ndarray[float]]:
joint_state = self.__task.moveit2.joint_state
if joint_state is None:
return None
if self.__arm_joint_indices is None:
self.__arm_joint_indices = [
i
for i, joint_name in enumerate(joint_state.name)
if joint_name in self.__task.robot_arm_joint_names
]
return np.take(joint_state.position, self.__arm_joint_indices)
class AttributeCurriculum:
"""
Curriculum that increases the value of an attribute (e.g. requirement) as the success rate increases.
Currently supports only attributes whose value increases.
"""
INFO_CURRICULUM_PREFIX: str = "curriculum/"
def __init__(
self,
success_rate_impl: SuccessRateImpl,
attribute_owner: Type,
attribute_name: str,
initial_value: float,
target_value: float,
target_value_threshold: float,
**kwargs,
):
self.__success_rate_impl = success_rate_impl
self.__attribute_owner = attribute_owner
self.__attribute_name = attribute_name
self.__initial_value = initial_value
self.__target_value_threshold = target_value_threshold
# Initialise current value of the attribute
self.__current_value = initial_value
# Store difference for faster computations
self.__value_diff = target_value - initial_value
def get_info(self) -> Dict:
info = {
f"{self.INFO_CURRICULUM_PREFIX}{self.__attribute_name}": self.__current_value,
}
return info
def reset_task(self):
# Update object count
self.__update_attribute()
def __update_attribute(self):
scale = min(
1.0,
max(
self.__initial_value,
self.__success_rate_impl.success_rate / self.__target_value_threshold,
),
)
self.__current_value = self.__initial_value + (scale * self.__value_diff)
if hasattr(self.__attribute_owner, self.__attribute_name):
setattr(self.__attribute_owner, self.__attribute_name, self.__current_value)
elif hasattr(self.__attribute_owner, f"_{self.__attribute_name}"):
setattr(
self.__attribute_owner,
f"_{self.__attribute_name}",
self.__current_value,
)
elif hasattr(self.__attribute_owner, f"__{self.__attribute_name}"):
setattr(
self.__attribute_owner,
f"__{self.__attribute_name}",
self.__current_value,
)
else:
raise Exception(
f"Attribute owner '{self.__attribute_owner}' does not have any attribute named {self.__attribute_name}."
)

View file

@ -1,341 +0,0 @@
from typing import Dict, List, Tuple
from gym_gz.base.task import Task
from rbs_gym.envs.tasks.curriculums.common import *
from rbs_gym.envs.utils.math import distance_to_nearest_point
class GraspStage(CurriculumStage):
"""
Ordered enum that represents stages of a curriculum for Grasp (and GraspPlanetary) task.
"""
REACH = 1
TOUCH = 2
GRASP = 3
LIFT = 4
class GraspCurriculum(
StageRewardCurriculum,
SuccessRateImpl,
WorkspaceScaleCurriculum,
ObjectSpawnVolumeScaleCurriculum,
ObjectCountCurriculum,
ArmStuckChecker,
):
"""
Curriculum learning implementation for the grasp task that provides termination (success/fail) and a reward for each stage of the task.
"""
def __init__(
self,
task: Task,
stages_base_reward: float,
reach_required_distance: float,
lift_required_height: float,
persistent_reward_each_step: float,
persistent_reward_terrain_collision: float,
persistent_reward_all_objects_outside_workspace: float,
persistent_reward_arm_stuck: float,
enable_stage_reward_curriculum: bool,
enable_workspace_scale_curriculum: bool,
enable_object_spawn_volume_scale_curriculum: bool,
enable_object_count_curriculum: bool,
reach_required_distance_min: Optional[float] = None,
reach_required_distance_max: Optional[float] = None,
reach_required_distance_max_threshold: Optional[float] = None,
lift_required_height_min: Optional[float] = None,
lift_required_height_max: Optional[float] = None,
lift_required_height_max_threshold: Optional[float] = None,
**kwargs,
):
StageRewardCurriculum.__init__(self, curriculum_stage=GraspStage, **kwargs)
SuccessRateImpl.__init__(self, **kwargs)
WorkspaceScaleCurriculum.__init__(
self, task=task, success_rate_impl=self, **kwargs
)
ObjectSpawnVolumeScaleCurriculum.__init__(
self, task=task, success_rate_impl=self, **kwargs
)
ObjectCountCurriculum.__init__(
self, task=task, success_rate_impl=self, **kwargs
)
ArmStuckChecker.__init__(self, task=task, **kwargs)
# Grasp task/environment that will be used to extract information from the scene
self.__task = task
# Parameters
self.__stages_base_reward = stages_base_reward
self.reach_required_distance = reach_required_distance
self.lift_required_height = lift_required_height
self.__persistent_reward_each_step = persistent_reward_each_step
self.__persistent_reward_terrain_collision = persistent_reward_terrain_collision
self.__persistent_reward_all_objects_outside_workspace = (
persistent_reward_all_objects_outside_workspace
)
self.__persistent_reward_arm_stuck = persistent_reward_arm_stuck
self.__enable_stage_reward_curriculum = enable_stage_reward_curriculum
self.__enable_workspace_scale_curriculum = enable_workspace_scale_curriculum
self.__enable_object_spawn_volume_scale_curriculum = (
enable_object_spawn_volume_scale_curriculum
)
self.__enable_object_count_curriculum = enable_object_count_curriculum
# Make sure that the persistent rewards for each step are negative
if self.__persistent_reward_each_step > 0.0:
self.__persistent_reward_each_step *= -1.0
if self.__persistent_reward_terrain_collision > 0.0:
self.__persistent_reward_terrain_collision *= -1.0
if self.__persistent_reward_all_objects_outside_workspace > 0.0:
self.__persistent_reward_all_objects_outside_workspace *= -1.0
if self.__persistent_reward_arm_stuck > 0.0:
self.__persistent_reward_arm_stuck *= -1.0
# Setup curriculum for Reach distance requirement (if enabled)
reach_required_distance_min = (
reach_required_distance_min
if reach_required_distance_min is not None
else reach_required_distance
)
reach_required_distance_max = (
reach_required_distance_max
if reach_required_distance_max is not None
else reach_required_distance
)
reach_required_distance_max_threshold = (
reach_required_distance_max_threshold
if reach_required_distance_max_threshold is not None
else 0.5
)
self.__reach_required_distance_curriculum_enabled = (
not reach_required_distance_min == reach_required_distance_max
)
if self.__reach_required_distance_curriculum_enabled:
self.__reach_required_distance_curriculum = AttributeCurriculum(
success_rate_impl=self,
attribute_owner=self,
attribute_name="reach_required_distance",
initial_value=reach_required_distance_min,
target_value=reach_required_distance_max,
target_value_threshold=reach_required_distance_max_threshold,
)
# Setup curriculum for Lift height requirement (if enabled)
lift_required_height_min = (
lift_required_height_min
if lift_required_height_min is not None
else lift_required_height
)
lift_required_height_max = (
lift_required_height_max
if lift_required_height_max is not None
else lift_required_height
)
lift_required_height_max_threshold = (
lift_required_height_max_threshold
if lift_required_height_max_threshold is not None
else 0.5
)
# Offset Lift height requirement by the robot base offset
lift_required_height += task.robot_model_class.BASE_LINK_Z_OFFSET
lift_required_height_min += task.robot_model_class.BASE_LINK_Z_OFFSET
lift_required_height_max += task.robot_model_class.BASE_LINK_Z_OFFSET
lift_required_height_max_threshold += task.robot_model_class.BASE_LINK_Z_OFFSET
self.__lift_required_height_curriculum_enabled = (
not lift_required_height_min == lift_required_height_max
)
if self.__lift_required_height_curriculum_enabled:
self.__lift_required_height_curriculum = AttributeCurriculum(
success_rate_impl=self,
attribute_owner=self,
attribute_name="lift_required_height",
initial_value=lift_required_height_min,
target_value=lift_required_height_max,
target_value_threshold=lift_required_height_max_threshold,
)
def get_reward(self) -> Reward:
if self.__enable_stage_reward_curriculum:
# Try to get reward from each stage
return StageRewardCurriculum.get_reward(
self,
ee_position=self.__task.get_ee_position(),
object_positions=self.__task.get_object_positions(),
touched_objects=self.__task.get_touched_objects(),
grasped_objects=self.__task.get_grasped_objects(),
)
else:
# If curriculum for stages is disabled, compute reward only for the last stage
return StageRewardCurriculum.get_reward(
self,
only_last_stage=True,
object_positions=self.__task.get_object_positions(),
grasped_objects=self.__task.get_grasped_objects(),
)
def is_done(self) -> bool:
return StageRewardCurriculum.is_done(self)
def get_info(self) -> Dict:
info = StageRewardCurriculum.get_info(self)
info.update(SuccessRateImpl.get_info(self))
if self.__enable_workspace_scale_curriculum:
info.update(WorkspaceScaleCurriculum.get_info(self))
if self.__enable_object_spawn_volume_scale_curriculum:
info.update(ObjectSpawnVolumeScaleCurriculum.get_info(self))
if self.__enable_object_count_curriculum:
info.update(ObjectCountCurriculum.get_info(self))
if self.__persistent_reward_arm_stuck:
info.update(ArmStuckChecker.get_info(self))
if self.__reach_required_distance_curriculum_enabled:
info.update(self.__reach_required_distance_curriculum.get_info())
if self.__lift_required_height_curriculum_enabled:
info.update(self.__lift_required_height_curriculum.get_info())
return info
def reset_task(self):
StageRewardCurriculum.reset_task(self)
if self.__enable_workspace_scale_curriculum:
WorkspaceScaleCurriculum.reset_task(self)
if self.__enable_object_spawn_volume_scale_curriculum:
ObjectSpawnVolumeScaleCurriculum.reset_task(self)
if self.__enable_object_count_curriculum:
ObjectCountCurriculum.reset_task(self)
if self.__persistent_reward_arm_stuck:
ArmStuckChecker.reset_task(self)
if self.__reach_required_distance_curriculum_enabled:
self.__reach_required_distance_curriculum.reset_task()
if self.__lift_required_height_curriculum_enabled:
self.__lift_required_height_curriculum.reset_task()
def on_episode_success(self):
self.update_success_rate(is_success=True)
def on_episode_failure(self):
self.update_success_rate(is_success=False)
def on_episode_timeout(self):
self.update_success_rate(is_success=False)
def get_reward_REACH(
self,
ee_position: Tuple[float, float, float],
object_positions: Dict[str, Tuple[float, float, float]],
**kwargs,
) -> float:
if not object_positions:
return 0.0
nearest_object_distance = distance_to_nearest_point(
origin=ee_position, points=list(object_positions.values())
)
self.__task.get_logger().debug(
f"[Curriculum] Distance to nearest object: {nearest_object_distance}"
)
if nearest_object_distance < self.reach_required_distance:
self.__task.get_logger().info(
f"[Curriculum] An object is now closer than the required distance of {self.reach_required_distance}"
)
self.stages_completed_this_episode[GraspStage.REACH] = True
return self.__stages_base_reward
else:
return 0.0
def get_reward_TOUCH(self, touched_objects: List[str], **kwargs) -> float:
if touched_objects:
self.__task.get_logger().info(
f"[Curriculum] Touched objects: {touched_objects}"
)
self.stages_completed_this_episode[GraspStage.TOUCH] = True
return self.__stages_base_reward
else:
return 0.0
def get_reward_GRASP(self, grasped_objects: List[str], **kwargs) -> float:
if grasped_objects:
self.__task.get_logger().info(
f"[Curriculum] Grasped objects: {grasped_objects}"
)
self.stages_completed_this_episode[GraspStage.GRASP] = True
return self.__stages_base_reward
else:
return 0.0
def get_reward_LIFT(
self,
object_positions: Dict[str, Tuple[float, float, float]],
grasped_objects: List[str],
**kwargs,
) -> float:
if not (grasped_objects or object_positions):
return 0.0
for grasped_object in grasped_objects:
grasped_object_height = object_positions[grasped_object][2]
self.__task.get_logger().debug(
f"[Curriculum] Height of grasped object '{grasped_objects}': {grasped_object_height}"
)
if grasped_object_height > self.lift_required_height:
self.__task.get_logger().info(
f"[Curriculum] Lifted object: {grasped_object}"
)
self.stages_completed_this_episode[GraspStage.LIFT] = True
return self.__stages_base_reward
return 0.0
def get_persistent_reward(
self, object_positions: Dict[str, Tuple[float, float, float]], **kwargs
) -> float:
# Subtract a small reward each step to provide incentive to act quickly
reward = self.__persistent_reward_each_step
# Negative reward for colliding with terrain
if self.__persistent_reward_terrain_collision:
if self.__task.check_terrain_collision():
self.__task.get_logger().info(
"[Curriculum] Robot collided with the terrain"
)
reward += self.__persistent_reward_terrain_collision
# Negative reward for having all objects outside of the workspace
if self.__persistent_reward_all_objects_outside_workspace:
if self.__task.check_all_objects_outside_workspace(
object_positions=object_positions
):
self.__task.get_logger().warn(
"[Curriculum] All objects are outside of the workspace"
)
reward += self.__persistent_reward_all_objects_outside_workspace
self.episode_failed = True
# Negative reward for arm getting stuck
if self.__persistent_reward_arm_stuck:
if ArmStuckChecker.is_robot_stuck(self):
self.__task.get_logger().error(
f"[Curriculum] Robot appears to be stuck, resetting..."
)
reward += self.__persistent_reward_arm_stuck
self.episode_failed = True
return reward
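
The REACH stage above gates its bonus on `distance_to_nearest_point`, a helper imported from `env_manager.utils.math` elsewhere in this diff. Below is a minimal sketch of the behaviour that usage implies (Euclidean distance from the end effector to the closest candidate point); the implementation is illustrative, not the library's own, and the positions are made up.

```python
import numpy as np


def distance_to_nearest_point(origin, points) -> float:
    """Euclidean distance from `origin` to the closest point in `points` (illustrative)."""
    origin = np.asarray(origin, dtype=np.float64)
    return float(min(np.linalg.norm(np.asarray(p) - origin) for p in points))


# Usage mirroring get_reward_REACH(): find the nearest object and compare
# the distance against the curriculum's required reach distance.
ee_position = (0.4, 0.0, 0.3)
object_positions = {"box": (0.5, 0.1, 0.3), "sphere": (0.9, -0.2, 0.3)}
nearest = distance_to_nearest_point(ee_position, list(object_positions.values()))
print(nearest < 0.2)  # True: the box is roughly 0.14 m away
```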

View file

@ -1,707 +0,0 @@
import abc
import multiprocessing
import sys
from itertools import count
from threading import Thread
from typing import Any, Dict, Optional, Tuple, Union
import numpy as np
import rclpy
from env_manager.models.configs import RobotData
from env_manager.models.robots import RobotWrapper, get_robot_model_class
from env_manager.scene import Scene
from env_manager.utils import Tf2Broadcaster, Tf2Listener
from env_manager.utils.conversions import orientation_6d_to_quat
from env_manager.utils.gazebo import (
Point,
Pose,
Quat,
get_model_orientation,
get_model_pose,
get_model_position,
transform_change_reference_frame_orientation,
transform_change_reference_frame_pose,
transform_change_reference_frame_position,
)
from env_manager.utils.math import quat_mul
from gym_gz.base.task import Task
from gym_gz.scenario.model_wrapper import ModelWrapper
from gym_gz.utils.typing import (
Action,
ActionSpace,
Observation,
ObservationSpace,
Reward,
)
from rclpy.callback_groups import ReentrantCallbackGroup
from rclpy.executors import MultiThreadedExecutor, SingleThreadedExecutor
from rclpy.node import Node
from robot_builder.elements.robot import Robot
from scipy.spatial.transform import Rotation
from rbs_gym.envs.control import (
CartesianForceController,
GripperController,
JointEffortController,
)
class Manipulation(Task, Node, abc.ABC):
_ids = count(0)
def __init__(
self,
agent_rate: float,
robot_model: str,
workspace_frame_id: str,
workspace_centre: Tuple[float, float, float],
workspace_volume: Tuple[float, float, float],
ignore_new_actions_while_executing: bool,
use_servo: bool,
scaling_factor_translation: float,
scaling_factor_rotation: float,
restrict_position_goal_to_workspace: bool,
enable_gripper: bool,
num_threads: int,
**kwargs,
):
# Get next ID for this task instance
self.id = next(self._ids)
# Initialize the Task base class
Task.__init__(self, agent_rate=agent_rate)
# Initialize ROS 2 context (if not done before)
try:
rclpy.init()
except Exception as e:
if not rclpy.ok():
sys.exit(f"ROS 2 context could not be initialised: {e}")
# Initialize ROS 2 Node base class
Node.__init__(self, f"rbs_gym_{self.id}")
self._allow_undeclared_parameters = True
self.declare_parameter("robot_description", "")
# Create callback group that allows execution of callbacks in parallel without restrictions
self._callback_group = ReentrantCallbackGroup()
# Create executor
if num_threads == 1:
executor = SingleThreadedExecutor()
elif num_threads > 1:
executor = MultiThreadedExecutor(
num_threads=num_threads,
)
else:
executor = MultiThreadedExecutor(num_threads=multiprocessing.cpu_count())
# Add this node to the executor
executor.add_node(self)
# Spin this node in background thread(s)
self._executor_thread = Thread(target=executor.spin, daemon=True, args=())
self._executor_thread.start()
# Get class of the robot model based on passed argument
# self.robot_model_class = get_robot_model_class(robot_model)
# Store passed arguments for later use
# self.workspace_centre = (
# workspace_centre[0],
# workspace_centre[1],
# workspace_centre[2] + self.robot_model_class.BASE_LINK_Z_OFFSET,
# )
# self.workspace_volume = workspace_volume
# self._restrict_position_goal_to_workspace = restrict_position_goal_to_workspace
# self._use_servo = use_servo
# self.__scaling_factor_translation = scaling_factor_translation
# self.__scaling_factor_rotation = scaling_factor_rotation
self._enable_gripper = enable_gripper
self._scene: Scene | None = None
#
        # # Get workspace bounds, useful in many computations
# workspace_volume_half = (
# workspace_volume[0] / 2,
# workspace_volume[1] / 2,
# workspace_volume[2] / 2,
# )
# self.workspace_min_bound = (
# self.workspace_centre[0] - workspace_volume_half[0],
# self.workspace_centre[1] - workspace_volume_half[1],
# self.workspace_centre[2] - workspace_volume_half[2],
# )
# self.workspace_max_bound = (
# self.workspace_centre[0] + workspace_volume_half[0],
# self.workspace_centre[1] + workspace_volume_half[1],
# self.workspace_centre[2] + workspace_volume_half[2],
# )
# Determine robot name and prefix based on current ID of the task
# self.robot_prefix: str = self.robot_model_class.DEFAULT_PREFIX
# if 0 == self.id:
# self.robot_name = self.robot_model_class.ROBOT_MODEL_NAME
# else:
# self.robot_name = f"{self.robot_model_class.ROBOT_MODEL_NAME}{self.id}"
# if self.robot_prefix.endswith("_"):
# self.robot_prefix = f"{self.robot_prefix[:-1]}{self.id}_"
# elif self.robot_prefix == "":
# self.robot_prefix = f"robot{self.id}_"
# Names of specific robot links, useful all around the code
# self.robot_base_link_name = self.robot_model_class.get_robot_base_link_name(
# self.robot_prefix
# )
# self.robot_arm_base_link_name = self.robot_model_class.get_arm_base_link_name(
# self.robot_prefix
# )
# self.robot_ee_link_name = self.robot_model_class.get_ee_link_name(
# self.robot_prefix
# )
# self.robot_arm_link_names = self.robot_model_class.get_arm_link_names(
# self.robot_prefix
# )
# self.robot_gripper_link_names = self.robot_model_class.get_gripper_link_names(
# self.robot_prefix
# )
# self.robot_arm_joint_names = self.robot_model_class.get_arm_joint_names(
# self.robot_prefix
# )
# self.robot_gripper_joint_names = self.robot_model_class.get_gripper_joint_names(
# self.robot_prefix
# )
# Get exact name substitution of the frame for workspace
self.workspace_frame_id = self.substitute_special_frame(workspace_frame_id)
# Specify initial positions (default configuration is used here)
# self.initial_arm_joint_positions = (
# self.robot_model_class.DEFAULT_ARM_JOINT_POSITIONS
# )
# self.initial_gripper_joint_positions = (
# self.robot_model_class.DEFAULT_GRIPPER_JOINT_POSITIONS
# )
# Names of important models (in addition to robot model)
# Setup listener and broadcaster of transforms via tf2
self.tf2_listener = Tf2Listener(node=self)
self.tf2_broadcaster = Tf2Broadcaster(node=self)
self.cartesian_control = True # TODO: make it as an external parameter
# Setup controllers
self.controller = CartesianForceController(self)
if not self.cartesian_control:
self.joint_controller = JointEffortController(self)
if self._enable_gripper:
self.gripper = GripperController(self, 0.064)
# Initialize task and randomizer overrides (e.g. from curriculum)
# Both of these are consumed at the beginning of reset
self.__task_parameter_overrides: dict[str, Any] = {}
self._randomizer_parameter_overrides: dict[str, Any] = {}
def create_spaces(self) -> Tuple[ActionSpace, ObservationSpace]:
action_space = self.create_action_space()
observation_space = self.create_observation_space()
return action_space, observation_space
def create_action_space(self) -> ActionSpace:
raise NotImplementedError()
def create_observation_space(self) -> ObservationSpace:
raise NotImplementedError()
def set_action(self, action: Action):
raise NotImplementedError()
def get_observation(self) -> Observation:
raise NotImplementedError()
def get_reward(self) -> Reward:
raise NotImplementedError()
def is_done(self) -> bool:
raise NotImplementedError()
def reset_task(self):
# self.__consume_parameter_overrides()
pass
# # Helper functions #
# def get_relative_ee_position(
# self, translation: Tuple[float, float, float]
# ) -> Tuple[float, float, float]:
# # Scale relative action to metric units
# translation = self.scale_relative_translation(translation)
# # Get current position
# current_position = self.get_ee_position()
# # Compute target position
# target_position = (
# current_position[0] + translation[0],
# current_position[1] + translation[1],
# current_position[2] + translation[2],
# )
#
# # Restrict target position to a limited workspace, if desired
# if self._restrict_position_goal_to_workspace:
# target_position = self.restrict_position_goal_to_workspace(target_position)
#
# return target_position
#
# def get_relative_ee_orientation(
# self,
# rotation: Union[
# float,
# Tuple[float, float, float, float],
# Tuple[float, float, float, float, float, float],
# ],
# representation: str = "quat",
# ) -> Tuple[float, float, float, float]:
# # Get current orientation
# current_quat_xyzw = self.get_ee_orientation()
#
# # For 'z' representation, result should always point down
    # # Therefore, create a new quaternion that contains only the yaw component
# if "z" == representation:
# current_yaw = Rotation.from_quat(current_quat_xyzw).as_euler("xyz")[2]
# current_quat_xyzw = Rotation.from_euler(
# "xyz", [np.pi, 0, current_yaw]
# ).as_quat()
#
# # Convert relative orientation representation to quaternion
# relative_quat_xyzw = None
# if "quat" == representation:
# relative_quat_xyzw = rotation
# elif "6d" == representation:
# vectors = tuple(
# rotation[x : x + 3] for x, _ in enumerate(rotation) if x % 3 == 0
# )
# relative_quat_xyzw = orientation_6d_to_quat(vectors[0], vectors[1])
# elif "z" == representation:
# rotation = self.scale_relative_rotation(rotation)
# relative_quat_xyzw = Rotation.from_euler("xyz", [0, 0, rotation]).as_quat()
#
# # Compute target position (combine quaternions)
# target_quat_xyzw = quat_mul(current_quat_xyzw, relative_quat_xyzw)
#
# # Normalise quaternion (should not be needed, but just to be safe)
# target_quat_xyzw /= np.linalg.norm(target_quat_xyzw)
#
# return target_quat_xyzw
# def scale_relative_translation(
# self, translation: Tuple[float, float, float]
# ) -> Tuple[float, float, float]:
# return (
# self.__scaling_factor_translation * translation[0],
# self.__scaling_factor_translation * translation[1],
# self.__scaling_factor_translation * translation[2],
# )
# def scale_relative_rotation(
# self,
# rotation: Union[float, Tuple[float, float, float], np.floating, np.ndarray],
# ) -> float:
# if not hasattr(rotation, "__len__"):
# return self.__scaling_factor_rotation * rotation
# else:
# return (
# self.__scaling_factor_rotation * rotation[0],
# self.__scaling_factor_rotation * rotation[1],
# self.__scaling_factor_rotation * rotation[2],
# )
#
# def restrict_position_goal_to_workspace(
# self, position: Tuple[float, float, float]
# ) -> Tuple[float, float, float]:
# return (
# min(
# self.workspace_max_bound[0],
# max(
# self.workspace_min_bound[0],
# position[0],
# ),
# ),
# min(
# self.workspace_max_bound[1],
# max(
# self.workspace_min_bound[1],
# position[1],
# ),
# ),
# min(
# self.workspace_max_bound[2],
# max(
# self.workspace_min_bound[2],
# position[2],
# ),
# ),
# )
# def restrict_servo_translation_to_workspace(
# self, translation: tuple[float, float, float]
# ) -> tuple[float, float, float]:
# current_ee_position = self.get_ee_position()
#
# translation = tuple(
# 0.0
# if (
# current_ee_position[i] > self.workspace_max_bound[i]
# and translation[i] > 0.0
# )
# or (
# current_ee_position[i] < self.workspace_min_bound[i]
# and translation[i] < 0.0
# )
# else translation[i]
# for i in range(3)
# )
#
# return translation
def get_ee_pose(
self,
) -> Pose:
"""
Return the current pose of the end effector with respect to arm base link.
"""
try:
robot_model = self.world.to_gazebo().get_model(self.robot.name).to_gazebo()
ee_position, ee_quat_xyzw = get_model_pose(
world=self.world,
model=robot_model,
link=self.robot.ee_link,
xyzw=True,
)
return transform_change_reference_frame_pose(
world=self.world,
position=ee_position,
quat=ee_quat_xyzw,
target_model=robot_model,
target_link=self.robot.base_link,
xyzw=True,
)
except Exception as e:
self.get_logger().warn(
f"Cannot get end effector pose from Gazebo ({e}), using tf2..."
)
transform = self.tf2_listener.lookup_transform_sync(
source_frame=self.robot.ee_link,
target_frame=self.robot.base_link,
retry=False,
)
if transform is not None:
return (
(
transform.translation.x,
transform.translation.y,
transform.translation.z,
),
(
transform.rotation.x,
transform.rotation.y,
transform.rotation.z,
transform.rotation.w,
),
)
else:
self.get_logger().error(
"Cannot get pose of the end effector (default values are returned)"
)
return (
(0.0, 0.0, 0.0),
(0.0, 0.0, 0.0, 1.0),
)
def get_ee_position(self) -> Point:
"""
Return the current position of the end effector with respect to arm base link.
"""
try:
robot_model = self.robot_wrapper
ee_position = get_model_position(
world=self.world,
model=robot_model,
link=self.robot.ee_link,
)
return transform_change_reference_frame_position(
world=self.world,
position=ee_position,
target_model=robot_model,
target_link=self.robot.base_link,
)
except Exception as e:
self.get_logger().debug(
f"Cannot get end effector position from Gazebo ({e}), using tf2..."
)
transform = self.tf2_listener.lookup_transform_sync(
source_frame=self.robot.ee_link,
target_frame=self.robot.base_link,
retry=False,
)
if transform is not None:
return (
transform.translation.x,
transform.translation.y,
transform.translation.z,
)
else:
self.get_logger().error(
"Cannot get position of the end effector (default values are returned)"
)
return (0.0, 0.0, 0.0)
def get_ee_orientation(self) -> Quat:
"""
Return the current xyzw quaternion of the end effector with respect to arm base link.
"""
try:
robot_model = self.robot_wrapper
ee_quat_xyzw = get_model_orientation(
world=self.world,
model=robot_model,
link=self.robot.ee_link,
xyzw=True,
)
return transform_change_reference_frame_orientation(
world=self.world,
quat=ee_quat_xyzw,
target_model=robot_model,
target_link=self.robot.base_link,
xyzw=True,
)
except Exception as e:
self.get_logger().warn(
f"Cannot get end effector orientation from Gazebo ({e}), using tf2..."
)
transform = self.tf2_listener.lookup_transform_sync(
source_frame=self.robot.ee_link,
target_frame=self.robot.base_link,
retry=False,
)
if transform is not None:
return (
transform.rotation.x,
transform.rotation.y,
transform.rotation.z,
transform.rotation.w,
)
else:
self.get_logger().error(
"Cannot get orientation of the end effector (default values are returned)"
)
return (0.0, 0.0, 0.0, 1.0)
def get_object_position(
self, object_model: ModelWrapper | str
) -> Point:
"""
Return the current position of an object with respect to arm base link.
Note: Only simulated objects are currently supported.
"""
try:
object_position = get_model_position(
world=self.world,
model=object_model,
)
return transform_change_reference_frame_position(
world=self.world,
position=object_position,
target_model=self.robot_wrapper.name(),
target_link=self.robot.base_link,
)
except Exception as e:
self.get_logger().error(
f"Cannot get position of {object_model} object (default values are returned): {e}"
)
return (0.0, 0.0, 0.0)
def get_object_positions(self) -> dict[str, Point]:
"""
Return the current position of all objects with respect to arm base link.
Note: Only simulated objects are currently supported.
"""
object_positions = {}
try:
robot_model = self.robot_wrapper
robot_arm_base_link = robot_model.get_link(link_name=self.robot.base_link)
for object_name in self.scene.__objects_unique_names:
object_position = get_model_position(
world=self.world,
model=object_name,
)
object_positions[object_name] = (
transform_change_reference_frame_position(
world=self.world,
position=object_position,
target_model=robot_model,
target_link=robot_arm_base_link,
)
)
except Exception as e:
self.get_logger().error(
f"Cannot get positions of all objects (empty Dict is returned): {e}"
)
return object_positions
def substitute_special_frame(self, frame_id: str) -> str:
if "arm_base_link" == frame_id:
return self.robot.base_link
elif "base_link" == frame_id:
return self.robot.base_link
elif "end_effector" == frame_id:
return self.robot.ee_link
elif "world" == frame_id:
try:
# In Gazebo, where multiple worlds are allowed
return self.world.to_gazebo().name()
except Exception as e:
self.get_logger().warn(f"{e}")
# Otherwise (e.g. real world)
return "rbs_gym_world"
else:
return frame_id
def wait_until_action_executed(self):
if self._enable_gripper:
#FIXME: code is not tested
self.gripper.wait_until_executed()
def move_to_initial_joint_configuration(self):
#TODO: Write this code for cartesian_control
pass
# self.moveit2.move_to_configuration(self.initial_arm_joint_positions)
# if (
# self.robot_model_class.CLOSED_GRIPPER_JOINT_POSITIONS
# == self.initial_gripper_joint_positions
# ):
# self.gripper.reset_close()
# else:
# self.gripper.reset_open()
def check_terrain_collision(self) -> bool:
"""
Returns true if robot links are in collision with the ground.
"""
robot_name_len = len(self.robot.name)
model_names = self.world.model_names()
terrain_model = self.world.get_model(self.scene.terrain.name)
for contact in terrain_model.contacts():
body_b = contact.body_b
if body_b.startswith(self.robot.name) and len(body_b) > robot_name_len:
link = body_b[robot_name_len + 2 :]
if link != self.robot.base_link and (
link in self.robot.actuated_joint_names
or link in self.robot.gripper_actuated_joint_names
):
return True
return False
# def check_all_objects_outside_workspace(
# self,
# object_positions: Dict[str, Tuple[float, float, float]],
# ) -> bool:
# """
# Returns true if all objects are outside the workspace
# """
#
# return all(
# [
# self.check_object_outside_workspace(object_position)
# for object_position in object_positions.values()
# ]
# )
#
# def check_object_outside_workspace(
# self,
# object_position: Tuple[float, float, float],
# ) -> bool:
# """
# Returns true if the object is outside the workspace
# """
#
# return (
# object_position[0] < self.workspace_min_bound[0]
# or object_position[1] < self.workspace_min_bound[1]
# or object_position[2] < self.workspace_min_bound[2]
# or object_position[0] > self.workspace_max_bound[0]
# or object_position[1] > self.workspace_max_bound[1]
# or object_position[2] > self.workspace_max_bound[2]
# )
# def add_parameter_overrides(self, parameter_overrides: Dict[str, Any]):
# self.add_task_parameter_overrides(parameter_overrides)
# self.add_randomizer_parameter_overrides(parameter_overrides)
# #
# def add_task_parameter_overrides(self, parameter_overrides: Dict[str, Any]):
# self.__task_parameter_overrides.update(parameter_overrides)
# #
# def add_randomizer_parameter_overrides(self, parameter_overrides: Dict[str, Any]):
# self._randomizer_parameter_overrides.update(parameter_overrides)
#
# def __consume_parameter_overrides(self):
# for key, value in self.__task_parameter_overrides.items():
# if hasattr(self, key):
# setattr(self, key, value)
# elif hasattr(self, f"_{key}"):
# setattr(self, f"_{key}", value)
# elif hasattr(self, f"__{key}"):
# setattr(self, f"__{key}", value)
# else:
# self.get_logger().error(
# f"Override '{key}' is not supperted by the task."
# )
#
# self.__task_parameter_overrides.clear()
@property
def robot(self) -> Robot:
"""The robot property."""
if self._scene:
return self._scene.robot_wrapper.robot
else:
raise ValueError("R")
@property
def robot_data(self) -> RobotData:
"""The robot_data property."""
if self._scene:
return self._scene.robot
else:
raise ValueError("RD")
@property
def robot_wrapper(self) -> RobotWrapper:
"""The robot_wrapper property."""
if self._scene:
return self._scene.robot_wrapper
else:
raise ValueError("RW")
@property
def scene(self) -> Scene:
"""The scene property."""
if self._scene:
return self._scene
else:
raise ValueError("S")
@scene.setter
def scene(self, value: Scene):
self._scene = value
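
`Manipulation.__init__` above spins its ROS 2 node on a private executor thread so that the synchronous `step()`/`reset()` calls of the environment never block subscription callbacks. The snippet below is a minimal, self-contained sketch of that pattern, assuming a sourced ROS 2 installation; the node and function names are illustrative only.

```python
import multiprocessing
from threading import Thread

import rclpy
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node


def spin_in_background(node: Node) -> Thread:
    """Spin `node` on a multi-threaded executor without blocking the caller."""
    executor = MultiThreadedExecutor(num_threads=multiprocessing.cpu_count())
    executor.add_node(node)
    thread = Thread(target=executor.spin, daemon=True)
    thread.start()
    return thread


if __name__ == "__main__":
    rclpy.init()
    node = Node("background_spin_example")  # hypothetical node name
    spin_in_background(node)
    node.get_logger().info("Node is spinning in a background thread")
    # ... build publishers/subscriptions on `node` from the main thread ...
    # rclpy.shutdown() when finished
```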

View file

@ -1,4 +0,0 @@
from .reach import Reach
from .reach_color_image import ReachColorImage
from .reach_depth_image import ReachDepthImage
# from .reach_octree import ReachOctree

View file

@ -1,120 +0,0 @@
import abc
import gymnasium as gym
import numpy as np
from gym_gz.utils.typing import Observation, ObservationSpace
from rbs_gym.envs.models.sensors import Camera
from rbs_gym.envs.observation import CameraSubscriber
from rbs_gym.envs.tasks.reach import Reach
class ReachColorImage(Reach, abc.ABC):
def __init__(
self,
camera_info: list[dict] = [],
monochromatic: bool = False,
**kwargs,
):
        # Initialize the Reach base class
Reach.__init__(
self,
**kwargs,
)
# Store parameters for later use
self._camera_width = sum(w.get("width", 0) for w in camera_info)
self._camera_height = camera_info[0]["height"]
self._monochromatic = monochromatic
self._saved = False
self.camera_subs: list[CameraSubscriber] = []
for camera in camera_info:
# Perception (RGB camera)
self.camera_subs.append(
CameraSubscriber(
node=self,
topic=Camera.get_color_topic(camera["name"], camera["type"]),
is_point_cloud=False,
callback_group=self._callback_group,
)
)
def create_observation_space(self) -> ObservationSpace:
"""
Creates the observation space for the Imitation Learning algorithm.
This method returns a dictionary-based observation space that includes:
- **image**: A 2D image captured from the robot's camera. The image's shape is determined by the camera's
width and height, and the number of channels (either monochromatic or RGB).
- Dimensions: `(camera_height, camera_width, channels)` where `channels` is 1 for monochromatic images
and 3 for RGB images.
- Pixel values are in the range `[0, 255]` with data type `np.uint8`.
        - **joint_states**: The joint positions of the robot arm, represented as continuous values normalized
          to the range `[-1.0, 1.0]`, one value per degree of freedom (DoF) of the arm.
Returns:
gym.spaces.Dict: A dictionary defining the observation space for the learning algorithm,
containing both image and joint_states information.
"""
return gym.spaces.Dict(
{
"image": gym.spaces.Box(
low=0,
high=255,
shape=(
self._camera_height,
self._camera_width,
1 if self._monochromatic else 3,
),
dtype=np.uint8,
),
"joint_states": gym.spaces.Box(low=-1.0, high=1.0),
}
)
def get_observation(self) -> Observation:
# Get the latest images
image = []
for sub in self.camera_subs:
image.append(sub.get_observation())
image_width = sum(i.width for i in image)
image_height = image[0].height
# if image_width == self._camera_width and image_height == self._camera_height:
# image_data = np.concatenate([i.data for i in image], axis=1)
assert (
image_width == self._camera_width and image_height == self._camera_height
), f"Error: Resolution of the input image does not match the configured observation space. ({image_width}x{image_height} instead of {self._camera_width}x{self._camera_height})"
# Reshape and create the observation
# color_image = np.array([i.data for i in image], dtype=np.uint8).reshape(
# self._camera_height, self._camera_width, 3
# )
color_image = np.concatenate(
[
np.array(i.data, dtype=np.uint8).reshape(i.height, i.width, 3)
for i in image
],
axis=1,
)
# # Debug save images
# from PIL import Image
# img_color = Image.fromarray(color_image)
# img_color.save("img_color.png")
if self._monochromatic:
observation = Observation(color_image[:, :, 0])
else:
observation = Observation(color_image)
self.get_logger().debug(f"\nobservation: {observation}")
# Return the observation
return observation
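
`get_observation()` above stitches the frames from several RGB cameras side by side before handing them to the policy, which is why `_camera_width` is the sum of the per-camera widths. A small NumPy-only sketch of that stitching, with made-up buffer sizes:

```python
import numpy as np

# Hypothetical raw buffers from two RGB cameras (height 4, widths 6 and 2).
left = np.zeros(4 * 6 * 3, dtype=np.uint8)
right = np.full(4 * 2 * 3, 255, dtype=np.uint8)
frames = [(left, 4, 6), (right, 4, 2)]

# Reshape each flat buffer to (height, width, 3) and concatenate along width.
stitched = np.concatenate([buf.reshape(h, w, 3) for buf, h, w in frames], axis=1)
assert stitched.shape == (4, 8, 3)  # heights must match; widths add up
```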

View file

@ -1,199 +0,0 @@
import abc
from typing import Tuple
from geometry_msgs.msg import WrenchStamped
import gymnasium as gym
import numpy as np
from gym_gz.utils.typing import (
Action,
ActionSpace,
Observation,
ObservationSpace,
Reward,
)
from rbs_gym.envs.tasks.manipulation import Manipulation
from env_manager.utils.math import distance_to_nearest_point
from std_msgs.msg import Float64MultiArray
from rbs_gym.envs.observation import TwistSubscriber, JointStates
from env_manager.utils.helper import get_object_position
class Reach(Manipulation, abc.ABC):
def __init__(
self,
sparse_reward: bool,
collision_reward: float,
act_quick_reward: float,
required_accuracy: float,
**kwargs,
):
        # Initialize the Manipulation base class
Manipulation.__init__(
self,
**kwargs,
)
# Additional parameters
self._sparse_reward: bool = sparse_reward
self._act_quick_reward = (
act_quick_reward if act_quick_reward >= 0.0 else -act_quick_reward
)
self._collision_reward = (
collision_reward if collision_reward >= 0.0 else -collision_reward
)
self._required_accuracy: float = required_accuracy
# Flag indicating if the task is done (performance - get_reward + is_done)
self._is_done: bool = False
self._is_truncated: bool = False
self._is_terminated: bool = False
# Distance to target in the previous step (or after reset)
self._previous_distance: float = 0.0
# self._collision_reward: float = collision_reward
# self.initial_gripper_joint_positions = self.robot_data.gripper_joint_positions
self.twist = TwistSubscriber(
self,
topic="/cartesian_force_controller/current_twist",
callback_group=self._callback_group,
)
self.joint_states = JointStates(
self, topic="/joint_states", callback_group=self._callback_group
)
self._action = WrenchStamped()
self._action_array: Action = np.array([])
self.followed_object_name = "sphere"
def create_action_space(self) -> ActionSpace:
# 0:3 - (x, y, z) force
# 3:6 - (x, y, z) torque
return gym.spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
def create_observation_space(self) -> ObservationSpace:
# 0:3 - (x, y, z) end effector position
# 3:6 - (x, y, z) target position
        # 6:12 - (x, y, z, rx, ry, rz) current twist (linear and angular velocity)
# Note: These could theoretically be restricted to the workspace and object spawn area instead of inf
return gym.spaces.Box(low=-np.inf, high=np.inf, shape=(12,), dtype=np.float32)
def set_action(self, action: Action):
# self.get_logger().debug(f"action: {action}")
# act = Float64MultiArray()
# act.data = action
# self.joint_controller.publisher.publish(act)
if isinstance(action, np.number):
raise RuntimeError("For the task Reach the action space should be array")
# Store action for reward function
self._action_array = action
# self._action.header.frame_id = self.robot_ee_link_name
self._action.header.stamp = self.get_clock().now().to_msg()
self._action.header.frame_id = self.robot.ee_link
self._action.wrench.force.x = float(action[0]) * 30.0
self._action.wrench.force.y = float(action[1]) * 30.0
self._action.wrench.force.z = float(action[2]) * 30.0
# self._action.wrench.torque.x = float(action[3]) * 10.0
# self._action.wrench.torque.y = float(action[4]) * 10.0
# self._action.wrench.torque.z = float(action[5]) * 10.0
self.controller.publisher.publish(self._action)
def get_observation(self) -> Observation:
# Get current end-effector and target positions
self.object_names = []
ee_position = self.get_ee_position()
target_position = self.get_object_position(self.followed_object_name)
# target_position = self.get_object_position(object_model=self.object_names[0])
# joint_states = tuple(self.joint_states.get_positions())
# self.get_logger().warn(f"joint_states: {joint_states[:7]}")
twist = self.twist.get_observation()
twt: tuple[float, float, float, float, float, float] = (
twist.twist.linear.x,
twist.twist.linear.y,
twist.twist.linear.z,
twist.twist.angular.x,
twist.twist.angular.y,
twist.twist.angular.z,
)
observation: Observation = np.concatenate([ee_position, target_position, twt], dtype=np.float32)
# Create the observation
# observation = Observation(
# np.concatenate([ee_position, target_position, twt], dtype=np.float32)
# )
self.get_logger().debug(f"\nobservation: {observation}")
# Return the observation
return observation
def get_reward(self) -> Reward:
reward = 0.0
# Compute the current distance to the target
current_distance = self.get_distance_to_target()
# Mark the episode done if target is reached
if current_distance < self._required_accuracy:
self._is_terminated = True
reward += 1.0 if self._sparse_reward else 0.0 # 100.0
self.get_logger().debug(f"reward_target: {reward}")
# Give reward based on how much closer robot got relative to the target for dense reward
if not self._sparse_reward:
distance_delta = self._previous_distance - current_distance
reward += distance_delta * 10.0
self._previous_distance = current_distance
self.get_logger().debug(f"reward_distance: {reward}")
if self.check_terrain_collision():
reward -= self._collision_reward
self._is_truncated = True
self.get_logger().debug(f"reward_collision: {reward}")
# Reward control
# reward -= np.abs(self._action_array).sum() * 0.01
# self.get_logger().debug(f"reward_c: {reward}")
# Subtract a small reward each step to provide incentive to act quickly (if enabled)
reward += self._act_quick_reward
self.get_logger().debug(f"reward: {reward}")
return Reward(reward)
def is_terminated(self) -> bool:
self.get_logger().debug(f"terminated: {self._is_terminated}")
return self._is_terminated
def is_truncated(self) -> bool:
self.get_logger().debug(f"truncated: {self._is_truncated}")
return self._is_truncated
def reset_task(self):
Manipulation.reset_task(self)
self._is_done = False
self._is_truncated = False
self._is_terminated = False
# Compute and store the distance after reset if using dense reward
if not self._sparse_reward:
self._previous_distance = self.get_distance_to_target()
self.get_logger().debug("\ntask reset")
def get_distance_to_target(self) -> float:
ee_position = self.get_ee_position()
object_position = self.get_object_position(object_model=self.followed_object_name)
self.tf2_broadcaster.broadcast_tf(
"world", "object", object_position, (0.0, 0.0, 0.0, 1.0), xyzw=True
)
return distance_to_nearest_point(origin=ee_position, points=[object_position])
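
For the dense-reward variant, `get_reward()` above pays the agent in proportion to how much closer it got to the target since the previous step, on top of the terminal bonus, collision penalty, and act-quick penalty. A standalone sketch of just the distance-delta term, with illustrative numbers (the 10.0 scale and the accuracy check mirror the code above):

```python
REQUIRED_ACCURACY = 0.05  # metres, illustrative


def shaped_reward(previous_distance: float, current_distance: float) -> tuple[float, bool]:
    """Dense reward for closing the gap to the target since the last step."""
    terminated = current_distance < REQUIRED_ACCURACY
    reward = (previous_distance - current_distance) * 10.0
    return reward, terminated


print(shaped_reward(0.30, 0.27))  # (~0.3, False): moved 3 cm closer
print(shaped_reward(0.06, 0.04))  # (~0.2, True): within the required accuracy
```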

View file

@ -1,89 +0,0 @@
import abc
import gymnasium as gym
import numpy as np
from gym_gz.utils.typing import Observation, ObservationSpace
from env_manager.models.sensors import Camera
from rbs_gym.envs.observation import CameraSubscriber
from rbs_gym.envs.tasks.reach import Reach
class ReachColorImage(Reach, abc.ABC):
def __init__(
self,
camera_info: list[dict] = [],
monochromatic: bool = False,
**kwargs,
):
        # Initialize the Reach base class
Reach.__init__(
self,
**kwargs,
)
# Store parameters for later use
self._camera_width = sum(w.get('width', 0) for w in camera_info)
self._camera_height = camera_info[0]["height"]
self._monochromatic = monochromatic
self._saved = False
self.camera_subs: list[CameraSubscriber] = []
for camera in camera_info:
# Perception (RGB camera)
self.camera_subs.append(CameraSubscriber(
node=self,
topic=Camera.get_color_topic(camera["name"], camera["type"]),
is_point_cloud=False,
callback_group=self._callback_group,
))
def create_observation_space(self) -> ObservationSpace:
# 0:3*height*width - rgb image
# 0:1*height*width - monochromatic (intensity) image
return gym.spaces.Box(
low=0,
high=255,
shape=(
self._camera_height,
self._camera_width,
1 if self._monochromatic else 3,
),
dtype=np.uint8,
)
def get_observation(self) -> Observation:
# Get the latest images
image = []
for sub in self.camera_subs:
image.append(sub.get_observation())
image_width = sum(i.width for i in image)
image_height = image[0].height
assert (
image_width == self._camera_width and image_height == self._camera_height
), f"Error: Resolution of the input image does not match the configured observation space. ({image_width}x{image_height} instead of {self._camera_width}x{self._camera_height})"
color_image = np.concatenate(
[np.array(i.data, dtype=np.uint8).reshape(i.height, i.width, 3) for i in image],
axis=1
)
# # Debug save images
# from PIL import Image
# img_color = Image.fromarray(color_image)
# img_color.save("img_color.png")
if self._monochromatic:
observation = Observation(color_image[:, :, 0])
else:
observation = Observation(color_image)
self.get_logger().debug(f"\nobservation: {observation}")
# Return the observation
return observation
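
This older copy of ReachColorImage returns the stitched image directly, so its observation space is the single image `Box` above rather than a `Dict`. A quick sanity-check sketch of such an image space, with an assumed 64x128 RGB resolution:

```python
import gymnasium as gym
import numpy as np

# Image observation space: uint8 pixels in [0, 255], (height, width, channels) layout.
image_space = gym.spaces.Box(low=0, high=255, shape=(64, 128, 3), dtype=np.uint8)
sample = image_space.sample()
assert sample.shape == (64, 128, 3) and sample.dtype == np.uint8
```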

View file

@ -1,67 +0,0 @@
import abc
import gymnasium as gym
import numpy as np
from gym_gz.utils.typing import Observation, ObservationSpace
from env_manager.models.sensors import Camera
from rbs_gym.envs.observation import CameraSubscriber
from rbs_gym.envs.tasks.reach import Reach
class ReachDepthImage(Reach, abc.ABC):
def __init__(
self,
camera_width: int,
camera_height: int,
camera_type: str = "depth_camera",
**kwargs,
):
        # Initialize the Reach base class
Reach.__init__(
self,
**kwargs,
)
# Store parameters for later use
self._camera_width = camera_width
self._camera_height = camera_height
# Perception (depth camera)
self.camera_sub = CameraSubscriber(
node=self,
topic=Camera.get_depth_topic(camera_type),
is_point_cloud=False,
callback_group=self._callback_group,
)
def create_observation_space(self) -> ObservationSpace:
# 0:height*width - depth image
return gym.spaces.Box(
low=0,
high=np.inf,
shape=(self._camera_height, self._camera_width, 1),
dtype=np.float32,
)
def get_observation(self) -> Observation:
# Get the latest image
image = self.camera_sub.get_observation()
# Construct from buffer and reshape
depth_image = np.frombuffer(image.data, dtype=np.float32).reshape(
self._camera_height, self._camera_width, 1
)
# Replace all instances of infinity with 0
depth_image[depth_image == np.inf] = 0.0
# Create the observation
observation = Observation(depth_image)
self.get_logger().debug(f"\nobservation: {observation}")
# Return the observation
return observation
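
`ReachDepthImage.get_observation()` above rebuilds the depth frame straight from the raw float32 byte buffer and zeroes out infinite readings (pixels with no return). A tiny sketch of that decode step on a hand-made 2x3 frame; the `.copy()` is needed here only because a Python `bytes` buffer is read-only:

```python
import numpy as np

# Hypothetical 2x3 depth frame delivered as a raw float32 byte buffer.
raw = np.array([0.5, np.inf, 1.2, 0.9, np.inf, 2.0], dtype=np.float32).tobytes()

depth = np.frombuffer(raw, dtype=np.float32).reshape(2, 3, 1).copy()
depth[depth == np.inf] = 0.0  # treat "no return" pixels as zero depth
print(depth.squeeze())
```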

View file

@ -1,295 +0,0 @@
#!/usr/bin/env -S python3 -O
import argparse
import os
from typing import Dict
import numpy as np
import torch as th
import yaml
from rbs_gym import envs as gz_envs
from rbs_gym.utils import create_test_env, get_latest_run_id, get_saved_hyperparams
from rbs_gym.utils.utils import ALGOS, StoreDict, str2bool
from stable_baselines3.common.utils import set_random_seed
from stable_baselines3.common.vec_env import DummyVecEnv, VecEnv, VecEnvWrapper
def find_model_path(log_path: str, args) -> str:
model_extensions = ["zip"]
for ext in model_extensions:
model_path = os.path.join(log_path, f"{args.env}.{ext}")
if os.path.isfile(model_path):
return model_path
if args.load_best:
best_model_path = os.path.join(log_path, "best_model.zip")
if os.path.isfile(best_model_path):
return best_model_path
if args.load_checkpoint is not None:
checkpoint_model_path = os.path.join(
log_path, f"rl_model_{args.load_checkpoint}_steps.zip"
)
if os.path.isfile(checkpoint_model_path):
return checkpoint_model_path
raise ValueError(f"No model found for {args.algo} on {args.env}, path: {log_path}")
def main():
parser = argparse.ArgumentParser()
# Environment and its parameters
parser.add_argument(
"--env", type=str, default="Reach-Gazebo-v0", help="Environment ID"
)
parser.add_argument(
"--env-kwargs",
type=str,
nargs="+",
action=StoreDict,
help="Optional keyword argument to pass to the env constructor",
)
parser.add_argument("--n-envs", type=int, default=1, help="Number of environments")
# Algorithm
parser.add_argument(
"--algo",
type=str,
choices=list(ALGOS.keys()),
required=False,
default="sac",
help="RL algorithm to use during the training",
)
parser.add_argument(
"--num-threads",
type=int,
default=-1,
help="Number of threads for PyTorch (-1 to use default)",
)
# Test duration
parser.add_argument(
"-n",
"--n-episodes",
type=int,
default=200,
help="Number of evaluation episodes",
)
# Random seed
parser.add_argument("--seed", type=int, default=0, help="Random generator seed")
# Model to test
parser.add_argument(
"-f", "--log-folder", type=str, default="logs", help="Path to the log directory"
)
parser.add_argument(
"--exp-id",
type=int,
default=0,
help="Experiment ID (default: 0: latest, -1: no exp folder)",
)
parser.add_argument(
"--load-best",
type=str2bool,
default=False,
help="Load best model instead of last model if available",
)
parser.add_argument(
"--load-checkpoint",
type=int,
help="Load checkpoint instead of last model if available, you must pass the number of timesteps corresponding to it",
)
# Deterministic/stochastic actions
parser.add_argument(
"--stochastic",
type=str2bool,
default=False,
help="Use stochastic actions instead of deterministic",
)
# Logging
parser.add_argument(
"--reward-log", type=str, default="reward_logs", help="Where to log reward"
)
parser.add_argument(
"--norm-reward",
type=str2bool,
default=False,
help="Normalize reward if applicable (trained with VecNormalize)",
)
# Disable render
parser.add_argument(
"--no-render",
type=str2bool,
default=False,
help="Do not render the environment (useful for tests)",
)
# Verbosity
parser.add_argument(
"--verbose", type=int, default=1, help="Verbose mode (0: no output, 1: INFO)"
)
args, unknown = parser.parse_known_args()
if args.exp_id == 0:
args.exp_id = get_latest_run_id(
os.path.join(args.log_folder, args.algo), args.env
)
print(f"Loading latest experiment, id={args.exp_id}")
# Sanity checks
if args.exp_id > 0:
log_path = os.path.join(args.log_folder, args.algo, f"{args.env}_{args.exp_id}")
else:
log_path = os.path.join(args.log_folder, args.algo)
assert os.path.isdir(log_path), f"The {log_path} folder was not found"
model_path = find_model_path(log_path, args)
off_policy_algos = ["qrdqn", "dqn", "ddpg", "sac", "her", "td3", "tqc"]
if args.algo in off_policy_algos:
args.n_envs = 1
set_random_seed(args.seed)
if args.num_threads > 0:
if args.verbose > 1:
print(f"Setting torch.num_threads to {args.num_threads}")
th.set_num_threads(args.num_threads)
stats_path = os.path.join(log_path, args.env)
hyperparams, stats_path = get_saved_hyperparams(
stats_path, norm_reward=args.norm_reward, test_mode=True
)
# load env_kwargs if existing
env_kwargs = {}
args_path = os.path.join(log_path, args.env, "args.yml")
if os.path.isfile(args_path):
with open(args_path, "r") as f:
loaded_args = yaml.load(f, Loader=yaml.UnsafeLoader)
if loaded_args["env_kwargs"] is not None:
env_kwargs = loaded_args["env_kwargs"]
# overwrite with command line arguments
if args.env_kwargs is not None:
env_kwargs.update(args.env_kwargs)
log_dir = args.reward_log if args.reward_log != "" else None
env = create_test_env(
args.env,
n_envs=args.n_envs,
stats_path=stats_path,
seed=args.seed,
log_dir=log_dir,
should_render=not args.no_render,
hyperparams=hyperparams,
env_kwargs=env_kwargs,
)
kwargs = dict(seed=args.seed)
if args.algo in off_policy_algos:
# Dummy buffer size as we don't need memory to evaluate the trained agent
kwargs.update(dict(buffer_size=1))
custom_objects = {
"observation_space": env.observation_space,
"action_space": env.action_space,
}
model = ALGOS[args.algo].load(
model_path, env=env, custom_objects=custom_objects, **kwargs
)
obs = env.reset()
# Deterministic by default
stochastic = args.stochastic
deterministic = not stochastic
print(
f"Evaluating for {args.n_episodes} episodes with a",
"deterministic" if deterministic else "stochastic",
"policy.",
)
state = None
episode_reward = 0.0
episode_rewards, episode_lengths, success_episode_lengths = [], [], []
ep_len = 0
episode = 0
# For HER, monitor success rate
successes = []
while episode < args.n_episodes:
action, state = model.predict(obs, state=state, deterministic=deterministic)
obs, reward, done, infos = env.step(action)
if not args.no_render:
env.render("human")
episode_reward += reward[0]
ep_len += 1
if done and args.verbose > 0:
episode += 1
print(f"--- Episode {episode}/{args.n_episodes}")
# NOTE: for env using VecNormalize, the mean reward
# is a normalized reward when `--norm_reward` flag is passed
print(f"Episode Reward: {episode_reward:.2f}")
episode_rewards.append(episode_reward)
print("Episode Length", ep_len)
episode_lengths.append(ep_len)
if infos[0].get("is_success") is not None:
print("Success?:", infos[0].get("is_success", False))
successes.append(infos[0].get("is_success", False))
if infos[0].get("is_success"):
success_episode_lengths.append(ep_len)
print(f"Current success rate: {100 * np.mean(successes):.2f}%")
episode_reward = 0.0
ep_len = 0
state = None
if args.verbose > 0 and len(successes) > 0:
print(f"Success rate: {100 * np.mean(successes):.2f}%")
if args.verbose > 0 and len(episode_rewards) > 0:
print(
f"Mean reward: {np.mean(episode_rewards):.2f} "
f"+/- {np.std(episode_rewards):.2f}"
)
if args.verbose > 0 and len(episode_lengths) > 0:
print(
f"Mean episode length: {np.mean(episode_lengths):.2f} "
f"+/- {np.std(episode_lengths):.2f}"
)
if args.verbose > 0 and len(success_episode_lengths) > 0:
print(
f"Mean episode length of successful episodes: {np.mean(success_episode_lengths):.2f} "
f"+/- {np.std(success_episode_lengths):.2f}"
)
# Workaround for https://github.com/openai/gym/issues/893
if not args.no_render:
if args.n_envs == 1 and "Bullet" not in args.env and isinstance(env, VecEnv):
# DummyVecEnv
# Unwrap env
while isinstance(env, VecEnvWrapper):
env = env.venv
if isinstance(env, DummyVecEnv):
env.envs[0].env.close()
else:
env.close()
else:
# SubprocVecEnv
env.close()
if __name__ == "__main__":
main()
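
The script above is tied to the rbs_gym helpers, but the core evaluation loop is plain Stable-Baselines3. Below is a trimmed-down sketch of that loop, assuming the `Reach-Gazebo-v0` environment is registered, the simulation is running, and a SAC checkpoint exists at the illustrative path shown; `buffer_size=1` mirrors the script's way of avoiding replay-buffer allocation at load time.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC
from stable_baselines3.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make("Reach-Gazebo-v0")])
# Path is illustrative; point it at an existing checkpoint.
model = SAC.load("logs/sac/Reach-Gazebo-v0_1/best_model.zip", env=env, buffer_size=1)

obs = env.reset()
episode_rewards, episode_reward = [], 0.0
while len(episode_rewards) < 5:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, infos = env.step(action)
    episode_reward += float(reward[0])
    if done[0]:
        episode_rewards.append(episode_reward)
        episode_reward = 0.0

print(f"Mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```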

View file

@ -1,234 +0,0 @@
#!/usr/bin/env -S python3 -O
import argparse
import difflib
import os
import uuid
from typing import Dict
import gymnasium as gym
import numpy as np
import torch as th
from rbs_gym import envs as gz_envs
from rbs_gym.utils.exp_manager import ExperimentManager
from rbs_gym.utils.utils import ALGOS, StoreDict, empty_str2none, str2bool
from stable_baselines3.common.utils import set_random_seed
def main():
parser = argparse.ArgumentParser()
# Environment and its parameters
parser.add_argument(
"--env", type=str, default="Reach-Gazebo-v0", help="Environment ID"
)
parser.add_argument(
"--env-kwargs",
type=str,
nargs="+",
action=StoreDict,
help="Optional keyword argument to pass to the env constructor",
)
parser.add_argument(
"--vec-env",
type=str,
choices=["dummy", "subproc"],
default="dummy",
help="Type of VecEnv to use",
)
# Algorithm and training
parser.add_argument(
"--algo",
type=str,
choices=list(ALGOS.keys()),
required=False,
default="sac",
help="RL algorithm to use during the training",
)
parser.add_argument(
"-params",
"--hyperparams",
type=str,
nargs="+",
action=StoreDict,
help="Optional RL hyperparameter overwrite (e.g. learning_rate:0.01 train_freq:10)",
)
parser.add_argument(
"-n",
"--n-timesteps",
type=int,
default=-1,
help="Overwrite the number of timesteps",
)
parser.add_argument(
"--num-threads",
type=int,
default=-1,
help="Number of threads for PyTorch (-1 to use default)",
)
# Continue training an already trained agent
parser.add_argument(
"-i",
"--trained-agent",
type=str,
default="",
help="Path to a pretrained agent to continue training",
)
# Random seed
parser.add_argument("--seed", type=int, default=-1, help="Random generator seed")
# Saving of model
parser.add_argument(
"--save-freq",
type=int,
default=10000,
help="Save the model every n steps (if negative, no checkpoint)",
)
parser.add_argument(
"--save-replay-buffer",
type=str2bool,
default=False,
help="Save the replay buffer too (when applicable)",
)
# Pre-load a replay buffer and start training on it
parser.add_argument(
"--preload-replay-buffer",
type=empty_str2none,
default="",
help="Path to a replay buffer that should be preloaded before starting the training process",
)
# Logging
parser.add_argument(
"-f", "--log-folder", type=str, default="logs", help="Path to the log directory"
)
parser.add_argument(
"-tb",
"--tensorboard-log",
type=empty_str2none,
default="tensorboard_logs",
help="Tensorboard log dir",
)
parser.add_argument(
"--log-interval",
type=int,
default=-1,
help="Override log interval (default: -1, no change)",
)
parser.add_argument(
"-uuid",
"--uuid",
type=str2bool,
default=False,
help="Ensure that the run has a unique ID",
)
# Evaluation
parser.add_argument(
"--eval-freq",
type=int,
default=-1,
help="Evaluate the agent every n steps (if negative, no evaluation)",
)
parser.add_argument(
"--eval-episodes",
type=int,
default=5,
help="Number of episodes to use for evaluation",
)
# Verbosity
parser.add_argument(
"--verbose", type=int, default=1, help="Verbose mode (0: no output, 1: INFO)"
)
# HER specifics
parser.add_argument(
"--truncate-last-trajectory",
type=str2bool,
default=True,
help="When using HER with online sampling the last trajectory in the replay buffer will be truncated after reloading the replay buffer.",
)
args, unknown = parser.parse_known_args()
# Check if the selected environment is valid
# If it could not be found, suggest the closest match
registered_envs = set(gym.envs.registry.keys())
if args.env not in registered_envs:
try:
closest_match = difflib.get_close_matches(args.env, registered_envs, n=1)[0]
except IndexError:
closest_match = "'no close match found...'"
raise ValueError(
f"{args.env} not found in gym registry, you maybe meant {closest_match}?"
)
# If no specific seed is selected, choose a random one
if args.seed < 0:
args.seed = np.random.randint(2**32 - 1, dtype=np.int64).item()
# Set the random seed across platforms
set_random_seed(args.seed)
# Setting num threads to 1 makes things run faster on cpu
if args.num_threads > 0:
if args.verbose > 1:
print(f"Setting torch.num_threads to {args.num_threads}")
th.set_num_threads(args.num_threads)
# Verify that pre-trained agent exists before continuing to train it
if args.trained_agent != "":
assert args.trained_agent.endswith(".zip") and os.path.isfile(
args.trained_agent
), "The trained_agent must be a valid path to a .zip file"
# If enabled, ensure that the run has a unique ID
uuid_str = f"_{uuid.uuid4()}" if args.uuid else ""
print("=" * 10, args.env, "=" * 10)
print(f"Seed: {args.seed}")
env_kwargs = {"render_mode": "human"}
exp_manager = ExperimentManager(
args,
args.algo,
args.env,
args.log_folder,
args.tensorboard_log,
args.n_timesteps,
args.eval_freq,
args.eval_episodes,
args.save_freq,
args.hyperparams,
args.env_kwargs,
args.trained_agent,
truncate_last_trajectory=args.truncate_last_trajectory,
uuid_str=uuid_str,
seed=args.seed,
log_interval=args.log_interval,
save_replay_buffer=args.save_replay_buffer,
preload_replay_buffer=args.preload_replay_buffer,
verbose=args.verbose,
vec_env_type=args.vec_env,
)
# Prepare experiment
results = exp_manager.setup_experiment()
if results is not None:
model, saved_hyperparams = results
# Normal training
if model is not None:
exp_manager.learn(model)
exp_manager.save_trained_model(model)
else:
exp_manager.hyperparameters_optimization()
if __name__ == "__main__":
main()
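
Before constructing the ExperimentManager, the script validates the requested environment ID against the Gymnasium registry and suggests the closest match on a typo. A small runnable sketch of that validation step, with a deliberately misspelled ID:

```python
import difflib

import gymnasium as gym

requested = "Rech-Gazebo-v0"  # typo on purpose
registered_envs = set(gym.envs.registry.keys())

if requested not in registered_envs:
    matches = difflib.get_close_matches(requested, registered_envs, n=1)
    suggestion = matches[0] if matches else "'no close match found...'"
    print(f"{requested} not found in gym registry, you maybe meant {suggestion}?")
```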

View file

@ -1,100 +0,0 @@
#!/usr/bin/env python3
import argparse
from typing import Dict
import gymnasium as gym
from stable_baselines3.common.env_checker import check_env
from rbs_gym import envs as gz_envs
from rbs_gym.utils.utils import StoreDict, str2bool
def main():
parser = argparse.ArgumentParser()
# Environment and its parameters
parser.add_argument(
"--env", type=str, default="Reach-Gazebo-v0", help="Environment ID"
)
parser.add_argument(
"--env-kwargs",
type=str,
nargs="+",
action=StoreDict,
help="Optional keyword argument to pass to the env constructor",
)
# Number of episodes to run
parser.add_argument(
"-n",
"--n-episodes",
type=int,
default=10000,
help="Overwrite the number of episodes",
)
# Random seed
parser.add_argument("--seed", type=int, default=69, help="Random generator seed")
# Flag to check environment
parser.add_argument(
"--check-env",
type=str2bool,
default=True,
help="Flag to check the environment before running the random agent",
)
# Flag to enable rendering
parser.add_argument(
"--render",
type=str2bool,
default=False,
help="Flag to enable rendering",
)
args, unknown = parser.parse_known_args()
# Create the environment
if args.env_kwargs is not None:
env = gym.make(args.env, **args.env_kwargs)
else:
env = gym.make(args.env)
    # Initialize the random seed (Gymnasium environments are seeded via reset())
    env.reset(seed=args.seed)
# Check the environment
if args.check_env:
check_env(env, warn=True, skip_render_check=True)
# Step environment for bunch of episodes
for episode in range(args.n_episodes):
# Initialize returned values
done = False
total_reward = 0
# Reset the environment
        observation, info = env.reset()
# Step through the current episode until it is done
while not done:
# Sample random action
action = env.action_space.sample()
# Step the environment with the random action
            observation, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated
# Accumulate the reward
total_reward += reward
print(f"Episode #{episode}\n\treward: {total_reward}")
# Cleanup once done
env.close()
if __name__ == "__main__":
main()
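
For reference, here is the same random-agent loop written directly against the current Gymnasium API, using `Pendulum-v1` as a stand-in so the sketch runs without Gazebo; note that `reset()` takes the seed and returns `(observation, info)`, and `step()` returns five values.

```python
import gymnasium as gym

env = gym.make("Pendulum-v1")
observation, info = env.reset(seed=69)

done, total_reward = False, 0.0
while not done:
    # Sample a random action and step the environment.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += float(reward)

print(f"reward: {total_reward}")
env.close()
```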

Some files were not shown because too many files have changed in this diff.