More content and editing config link

Sharwin24 2025-01-08 17:13:07 -06:00
parent 535419b959
commit 353fe1862d
5 changed files with 97 additions and 17 deletions


@@ -1,4 +1,4 @@
-baseURL: "https://sharwin24.github.io/toha/"
+baseURL: "https://sharwin24.github.io/toha"
languageCode: "en-us"
title: "Sharwin's Portfolio"
@@ -39,7 +39,6 @@ module:
  - source: ../../node_modules/katex/dist/fonts
    target: static/fonts
markup:
-  defaultMarkdownHandler: goldmark
  goldmark:
@@ -114,7 +113,7 @@ params:
favicon: /images/sharwin_portrait.jpg
# GitHub repo of your site
-gitRepo: https://github.com/Sharwin24/toha
+gitRepo: https://github.com/Sharwin24/sharwin24.github.io
# Default branch of your Git repo
gitBranch: main


@@ -18,16 +18,13 @@ This project incorporates several robotics concepts to perform a pick and place
<!-- ![Robot](robot.png) -->
<div align="center">
-  <img src="robot.png" alt="Robot">
+  <img src="robot.png" alt="Robot" style="border-radius: 15px;">
</div>
## Omnidirectional Mobile Base Kinematics (Mecanum Wheels)
This robot uses mecanum wheels, which are omnidirectional wheels with 45-degree rollers that allow the robot to move in any direction without changing its orientation. When controlling the robot, we command the wheel velocities, which can be described by the following equation:
$$
u_i=\frac{1}{r_i}\overbrace{\begin{bmatrix}1 & \tan\gamma_i\end{bmatrix}}^{\text{driving direction}}\underbrace{\begin{bmatrix}\cos\beta_i & \sin\beta_i \\\ -\sin\beta_i & \cos\beta_i\end{bmatrix}}_{\text{linear velocity in wheel frame}}\overbrace{\begin{bmatrix}-y_i & 1 & 0 \\\ x_i & 0 & 1\end{bmatrix}}^{\text{linear velocity in body frame}} V_b
$$
where:
- \\(u_i\\) is the wheel velocity
@@ -38,12 +35,89 @@ where:
- \\(V_b\\) is the robot's 3D body twist, composed of the linear and angular velocity: \\([v_x, v_y, \omega]\\)
<!-- ![Mecanum Base](mecanum_base.png) -->
-<div align="center">
+<div align="center" style="background-color: rgba(255, 255, 255, 0.3); border-radius: 15px;">
  <img src="mecanum_base.png" alt="Mecanum Base">
</div>
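To make the equation above concrete, here is a minimal Python sketch that builds \\(H(0)\\) row by row and maps a body twist to the four wheel speeds. The wheel radius `r`, half-dimensions `l` and `w`, and the per-wheel \\(\gamma_i\\) signs are illustrative assumptions for a youBot-like base, not this project's values; the twist is ordered \\([\omega, v_x, v_y]\\) to match the matrix equation.
```python
import numpy as np

# Illustrative parameters (assumptions, not this project's values):
# wheel radius r and chassis half-length/half-width (l, w).
r, l, w = 0.0475, 0.235, 0.15

def h_row(x, y, beta, gamma):
    """One row of H(0): maps the body twist [omega, v_x, v_y] to wheel speed u_i."""
    driving_dir = np.array([1.0, np.tan(gamma)])          # [1, tan(gamma_i)]
    wheel_frame = np.array([[np.cos(beta), np.sin(beta)],  # rotation into the wheel frame
                            [-np.sin(beta), np.cos(beta)]])
    body_frame = np.array([[-y, 1.0, 0.0],                 # linear velocity at (x_i, y_i)
                           [x, 0.0, 1.0]])                 # in the body frame
    return (1.0 / r) * driving_dir @ wheel_frame @ body_frame

# Four mecanum wheels at the chassis corners, beta_i = 0, gamma_i = +/-45 degrees
H0 = np.vstack([
    h_row(l, w, 0.0, -np.pi / 4),    # front-left
    h_row(l, -w, 0.0, np.pi / 4),    # front-right
    h_row(-l, -w, 0.0, -np.pi / 4),  # rear-right
    h_row(-l, w, 0.0, np.pi / 4),    # rear-left
])

Vb = np.array([0.0, 0.2, 0.0])  # [omega, v_x, v_y]: drive straight forward
u = H0 @ Vb                     # commanded wheel speeds [rad/s], one per wheel
```
For a pure forward twist every row yields the same wheel speed, which is a quick way to sanity-check the \\(\gamma_i\\) sign conventions.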
## Trajectory Generation
There were two components to the trajectory generation:
1. End-Effector Reference Trajectory: Generating a desired end-effector trajectory for the robot to follow.
2. Time allocation: Generating the time and number of simulation steps per trajectory segment based on the desired end-effector trajectory.
These were accomplished using the trajectory generation functions provided by the `modern_robotics` library after splitting the sequence into 8 separate segments.
```python
import modern_robotics as mr
import numpy as np

# The SE(3) waypoint configurations (ee_initial_config, cube_initial_config,
# cube_final_config, standoff_config, ee_grasping_config) and
# num_reference_configs are defined elsewhere in the script.
dt = 0.01  # [s]
total_time = 10  # [s]
traj_1_time = 0.21 * total_time # Initial -> Pick_Standoff
traj_2_time = 0.01 * total_time # Pick_Standoff -> Cube_Initial
traj_3_time = 0.07 * total_time # Cube_Initial -> Gripper Closed
traj_4_time = 0.21 * total_time # Gripper Closed -> Pick_Standoff
traj_5_time = 0.21 * total_time # Pick_Standoff -> Place_Standoff
traj_6_time = 0.01 * total_time # Place_Standoff -> Cube_Final
traj_7_time = 0.07 * total_time # Cube_Final -> Gripper Open
traj_8_time = 0.21 * total_time # Gripper Open -> Place_Standoff
trajectory_time = np.array([
traj_1_time, traj_2_time, traj_3_time, traj_4_time,
traj_5_time, traj_6_time, traj_7_time, traj_8_time
])
# steps per trajectory segment
# traj_steps = int(total_time * num_reference_configs / dt)
traj_steps = [
int(t * num_reference_configs / dt) for t in trajectory_time
]
# Apply transformations for the waypoints we'll need
pick_standoff_config = cube_initial_config @ standoff_config
place_standoff_config = cube_final_config @ standoff_config
pick_grasp_config = cube_initial_config @ ee_grasping_config
place_grasp_config = cube_final_config @ ee_grasping_config
# Gripper states for the trajectory
gripper_states = []
# Trajectory 1: Initial -> Pick_Standoff (Screw Trajectory)
gripper_states.extend([0] * traj_steps[0])
traj_1 = mr.ScrewTrajectory(
ee_initial_config, pick_standoff_config, trajectory_time[0], traj_steps[0], method=3
)
# Trajectory 2: Pick_Standoff -> Cube_Initial (Cartesian Trajectory)
gripper_states.extend([0] * traj_steps[1])
traj_2 = mr.CartesianTrajectory(
pick_standoff_config, pick_grasp_config, trajectory_time[1], traj_steps[1], method=3
)
# Trajectory 3: Cube_Initial -> Gripper Closed
gripper_states.extend([1] * traj_steps[2])
traj_3 = mr.ScrewTrajectory(
pick_grasp_config, pick_grasp_config, trajectory_time[2], traj_steps[2], method=3
)
# Trajectory 4: Gripper Closed -> Pick_Standoff (Cartesian Trajectory)
gripper_states.extend([1] * traj_steps[3])
traj_4 = mr.CartesianTrajectory(
pick_grasp_config, pick_standoff_config, trajectory_time[3], traj_steps[3], method=3
)
# Trajectory 5: Pick_Standoff -> Place_Standoff (Screw Trajectory)
gripper_states.extend([1] * traj_steps[4])
traj_5 = mr.ScrewTrajectory(
pick_standoff_config, place_standoff_config, trajectory_time[4], traj_steps[4], method=3
)
# Trajectory 6: Place_Standoff -> Cube_Final (Cartesian Trajectory)
gripper_states.extend([1] * traj_steps[5])
traj_6 = mr.CartesianTrajectory(
place_standoff_config, place_grasp_config, trajectory_time[5], traj_steps[5], method=3
)
# Trajectory 7: Cube_Final -> Gripper Open
gripper_states.extend([0] * traj_steps[6])
traj_7 = mr.ScrewTrajectory(
place_grasp_config, place_grasp_config, trajectory_time[6], traj_steps[6], method=3
)
# Trajectory 8: Gripper Open -> Place_Standoff
gripper_states.extend([0] * traj_steps[7])
traj_8 = mr.CartesianTrajectory(
place_grasp_config, place_standoff_config, trajectory_time[7], traj_steps[7], method=3
)
trajectory = np.concatenate(
(traj_1, traj_2, traj_3, traj_4, traj_5, traj_6, traj_7, traj_8), axis=0
)
```
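As a quick sanity check (a hypothetical snippet, not from the commit), the segment fractions should sum to the total time, and the gripper-state list should stay aligned with the concatenated trajectory:
```python
# Both checks use only names defined in the block above.
assert np.isclose(trajectory_time.sum(), total_time)
assert len(gripper_states) == sum(traj_steps) == len(trajectory)
```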
## Odometry
Odometry is the process of estimating the mobile robot's pose by integrating the wheel velocities. The robot's pose is represented by the chassis configuration \\([\phi, x, y]\\), where \\(\phi\\) is the orientation and \\(x, y\\) are the position in the world frame. We can compute the body twist \\(V_b\\) from the wheel velocities \\(\dot{\theta}\\) using \\(F = \text{pinv}(H(0))\\), the pseudoinverse of the chassis kinematics matrix, so that \\(V_b = F\dot{\theta}\\); integrating this twist over each timestep \\(\Delta t\\) updates the pose estimate.
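A minimal sketch of this integration step, assuming the `H0` matrix from the kinematics sketch above, the twist ordering \\([\omega, v_x, v_y]\\), and a hypothetical helper name `odometry_step`:
```python
import numpy as np

F = np.linalg.pinv(H0)  # maps wheel speeds to the planar body twist [omega, v_x, v_y]

def odometry_step(q, theta_dot, dt):
    """Advance the chassis configuration q = [phi, x, y] by one timestep."""
    omega, vx, vy = F @ (theta_dot * dt)  # body displacement twist over dt
    if abs(omega) < 1e-9:
        # No rotation: the displacement is a straight-line translation
        dq_b = np.array([0.0, vx, vy])
    else:
        # Exact integration of a constant twist over the timestep
        dq_b = np.array([
            omega,
            (vx * np.sin(omega) + vy * (np.cos(omega) - 1.0)) / omega,
            (vy * np.sin(omega) + vx * (1.0 - np.cos(omega))) / omega,
        ])
    phi = q[0]
    # Rotate the body-frame displacement into the world frame and accumulate
    world_from_body = np.array([
        [1.0, 0.0, 0.0],
        [0.0, np.cos(phi), -np.sin(phi)],
        [0.0, np.sin(phi), np.cos(phi)],
    ])
    return q + world_from_body @ dq_b
```
Called once per control step with the measured wheel speeds, this accumulates the chassis pose estimate; the `if` branch handles the singular pure-translation case.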


@@ -1,7 +1,7 @@
---
title: "ToastBot"
-date: 2025-01-05T09:00:00+00:00
+date: 2024-12-12T09:00:00+00:00
-description: Introduction to Sample Post
+description: Pick and Place Sequence using the Franka Emika Panda Robot Arm
hero: images/toastbot.jpg
author:
  image: /images/sharwin_portrait.jpg
@@ -10,6 +10,13 @@ menu:
    name: Toastbot
    identifier: ToastBot
    weight: 2
-tags: ["Basic", "Multi-lingual"]
+tags: ["Python", "ROS", "Moveit API", "Intel Realsense"]
-categories: ["Basic"]
+# categories: ["Basic"]
---
Using a 7-DoF Franka Emika Panda robot arm, we developed a pick and place sequence with an Intel RealSense camera for identifying AprilTags.
<!-- Embed this youtube video: https://www.youtube.com/watch?v=XGcdhWRo-iU&t=1s -->
<div align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/XGcdhWRo-iU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

go.mod

@@ -4,5 +4,5 @@ go 1.21
replace(
-  github.com/hugo-toha/toha/v4 => /home/sharwin/web-dev/toha
+  github.com/hugo-toha/toha/v4 => /home/sharwin/sharwin24.github.io/
)


@@ -126,7 +126,7 @@
<!-------------- Newsletter --------------->
{{ if $newsletterEnabled }}
{{ $provider := site.Params.footer.newsletter.provider }}
-<!-- /home/sharwin/web-dev/toha-sharwin/data/en/site.yaml -->
+<!-- /home/sharwin/sharwin24.github.io/data/en/site.yaml -->
{{ end }}
</div>
</div>