Added Napkin project README and reordered posts
parent fc16c0345d, commit bec55aeed1
14 changed files with 110 additions and 22 deletions
BIN  assets/images/robot_states.png (new file, 49 KiB; binary file not shown)
BIN  assets/images/robotech_sim.gif (new file, 1.2 MiB; binary file not shown)
@@ -9,7 +9,7 @@ menu:
   sidebar:
     name: AGV Motion Planning
     identifier: agv-hsky
-    weight: 1
+    weight: 6
 tags: ["Basic", "Multi-lingual"]
 categories: ["Basic"]
 ---
@@ -9,7 +9,7 @@ menu:
   sidebar:
     name: Automated Poker Table
     identifier: automated-poker-table
-    weight: 2
+    weight: 3
 tags: ["Basic", "Multi-lingual"]
 categories: ["Basic"]
 ---
@@ -2,16 +2,16 @@
 title: "Mobile Manipulation"
 date: 2025-01-02T09:00:00+00:00
 description: Introduction to Sample Post
-hero: images/error_plot.png
+hero: images/robot_states.png
 author:
   image: /images/sharwin_portrait.jpg
 menu:
   sidebar:
     name: Mobile Manipulation
     identifier: mobile-manipulation
-    weight: 0
-tags: ["Basic", "Multi-lingual"]
-categories: ["Basic"]
+    weight: 1
+tags: ["Python", "CoppeliaSim", "Odometry", "Omnidirectional Robot Kinematics"]
+# categories: ["Basic"]
 ---
 
+This project incorporates several robotics concepts to perform a pick and place task in simulation using a mecanum-wheeled mobile robot with a 5 degree-of-freedom robot arm.
BIN  content/posts/napkin-ai/graph_structure.jpg (new file, 1.5 MiB; binary file not shown)
@@ -1,7 +1,7 @@
 ---
 title: "Napkin.AI"
-date: 2025-01-05T09:00:00+00:00
-description: Introduction to Sample Post
+date: 2024-02-03T09:00:00+00:00
+description: Napkin.AI Hackathon Project
 hero: images/error_plot.png
 author:
   image: /images/sharwin_portrait.jpg
@@ -9,7 +9,79 @@ menu:
   sidebar:
     name: Napkin.AI
     identifier: napkin-ai
-    weight: 4
-tags: ["Basic", "Multi-lingual"]
-categories: ["Basic"]
+    weight: 10
+tags: ["Python", "PyTorch", "Retrieval-Augmented Generation", "Knowledge Graphs", "Language Models"]
+# categories: ["Basic"]
 ---

<!-- # Napkin AI  -->

A fast, lightweight graph retrieval-augmented generation tool for navigating codebases.

**Update: Winner of the Best Developer Tool sponsored by [Warp](https://www.warp.dev/)**

## 💡 Inspiration
As developers, we are always exploring new projects and open-source repositories. However, the complexity and scale of these codebases are a major barrier to entry, even for experienced developers. After countless hours reading files, documentation, and (hopefully solved) issues, it's still easy to find yourself lost and intimidated. We wanted to create a tool that helps developers navigate codebases more effectively, answering questions and providing context on both specific and broad aspects of a project.

Modern large language models struggle with hallucinations and a lack of specificity and context. We address these shortcomings by using a knowledge graph to supply context and information to the model, allowing it to better understand the codebase and answer questions more effectively.

## 🙌 What it does
Napkin is a graph retrieval-augmented generation tool that converts a codebase into a knowledge graph containing detailed relationships between all files, functions, and classes. The knowledge graph is then used to answer questions about the codebase, automatically supplying relevant context and information for each query the user makes to a fine-tuned LLM.

The basic workflow of Napkin is as follows:



When a user asks a question about a codebase, the question is first embedded into a vector using a pre-trained model. This vector queries the knowledge graph, which returns the set of nodes (files, functions, and classes) most relevant to the question. Those nodes are used to augment the original prompt, and a fine-tuned language model generates a response, giving the user the context they need to better understand the codebase.
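The query path above can be sketched end to end. This is a minimal illustration, not Napkin's actual API: the function names, the node dictionary shape, and the prompt template are all ours.

```python
def answer_question(question, graph, embed, llm, k=2):
    """Illustrative Napkin query path: embed the question, pull the k most
    relevant graph nodes, splice them into the prompt, and generate."""
    # 1. Embed the user's question into a vector.
    q_vec = embed(question)

    # 2. Query the knowledge graph for the k most relevant nodes
    #    (here: highest dot product with the query vector).
    def score(node):
        return sum(a * b for a, b in zip(q_vec, node["embedding"]))
    relevant = sorted(graph, key=score, reverse=True)[:k]

    # 3. Augment the original prompt with the retrieved context.
    context = "\n".join(node["text"] for node in relevant)
    prompt = f"{context}\n\nQ: {question}"

    # 4. Generate a response with the fine-tuned language model.
    return llm(prompt)
```

With a toy embedder and an identity "LLM" stub, you can see the retrieved context land in the final prompt before generation.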

## 👷 How we built it
As shown in the prior section, Napkin has a few key moving parts:
1. Custom-built Python knowledge graph parser
2. LLM fine-tuning via Hugging Face
3. Fine-tuned embedding model
4. Graph retrieval algorithm
5. Prompt augmentation engine

We built Napkin's knowledge graph parser in Python to best fit our specific needs, with the help of [ast](https://docs.python.org/3/library/ast.html), the standard-library module for parsing Python code into an abstract syntax tree (AST). We parse each file in the codebase into an AST, then walk the tree to identify the relationships between files, classes, and functions. From this information we construct a graph, with nodes representing files, classes, and functions, and edges representing the relationships between them, such as imports, definitions, and calls.
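A minimal version of this parsing step might look like the following. It is a simplified sketch using only the `ast` module the text names; the real parser also tracks imports and calls across files, and the function name and tuple shape here are our own.

```python
import ast

def extract_nodes(source: str, filename: str) -> list[tuple[str, str, str]]:
    """Walk a file's AST and list (kind, name, parent) entries — the raw
    material for graph nodes and 'defines' edges."""
    tree = ast.parse(source, filename=filename)
    entries = []

    def visit(node, parent):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef)):
                entries.append(("function", child.name, parent))
                visit(child, f"{parent}.{child.name}")  # nested defs
            elif isinstance(child, ast.ClassDef):
                entries.append(("class", child.name, parent))
                visit(child, f"{parent}.{child.name}")  # methods
            else:
                visit(child, parent)

    visit(tree, filename)
    return entries

src = "class Robot:\n    def move(self):\n        pass\n"
print(extract_nodes(src, "robot.py"))
# [('class', 'Robot', 'robot.py'), ('function', 'move', 'robot.py.Robot')]
```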

An example of the graph structure can be seen here:



We fine-tuned a [Llama 7B model](https://huggingface.co/meta-llama/Llama-2-7b) from Hugging Face on a dataset of codebase questions and answers so it could better understand such questions and give more relevant responses. We chose this model because it is lightweight and fast.

We also fine-tuned an embedding model to better represent the relationships between nodes in the knowledge graph. We used [CodeBERT](https://github.com/microsoft/CodeBERT) to generate an embedding for each node, then fine-tuned it so that related nodes sit close together in embedding space.

We implemented a custom graph retrieval algorithm that uses these embeddings to find the nodes in the knowledge graph most relevant to a given query. It is a greedy heuristic search that maximizes cosine similarity at each step: we start with the node most similar to the query, then iteratively add the next most similar nodes until the context is filled.
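The greedy step described above can be sketched as follows. This is our reading of the algorithm, not Napkin's exact code: the `(embedding, token_count)` node shape and the token budget are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def greedy_retrieve(query_vec, nodes, budget):
    """Greedily pick the node most similar to the query, then keep adding
    the next most similar node until the token budget is filled.
    `nodes` maps node id -> (embedding, token_count)."""
    remaining = dict(nodes)
    selected, used = [], 0
    while remaining:
        best = max(remaining, key=lambda n: cosine(query_vec, remaining[n][0]))
        _, tokens = remaining.pop(best)
        if used + tokens > budget:
            break  # context is full
        selected.append(best)
        used += tokens
    return selected

query = [1.0, 0.0]
nodes = {
    "parser.py": ([1.0, 0.0], 50),
    "utils.py": ([0.0, 1.0], 50),
    "graph.py": ([0.9, 0.1], 50),
}
print(greedy_retrieve(query, nodes, budget=100))  # ['parser.py', 'graph.py']
```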

Finally, we implemented a prompt augmentation engine that used the set of nodes returned by the graph retrieval algorithm to augment the original prompt and generate a response to the query using the fine-tuned LLM.
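The augmentation step itself is straightforward string assembly. A minimal sketch, with the template wording and function name being our own rather than Napkin's:

```python
def augment_prompt(question: str, snippets: list[tuple[str, str]]) -> str:
    """Prepend retrieved (node name, source text) pairs to the user's
    question so the fine-tuned LLM answers with codebase context."""
    context = "\n\n".join(f"### {name}\n{text}" for name, text in snippets)
    return (
        "You are answering questions about a codebase. "
        "Use the context below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = augment_prompt(
    "What does build_graph do?",
    [("graph.py::build_graph", "def build_graph(files): ...")],
)
print(prompt)
```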

## 🛑 Challenges we ran into
We began the project hoping to use one of two preexisting models for generating knowledge graphs from Python repositories. The first was Google Research's [python-graphs](https://github.com/google-research/python-graphs), which excels at creating in-depth control-flow graphs and program graphs from Python functions. We quickly realized it was not well suited to our purposes: it was designed for single functions or, at most, single files, and could not handle the scale of a full codebase. Napkin relies on a thorough understanding of the entire codebase and the relationships between distinct files, functions, and classes, which python-graphs was not designed to capture.

The second was WALA's [graph4code](https://github.com/wala/graph4code), which builds on tools like python-graphs by explicitly modeling library calls, following data flow across functions, and simulating function calls. Despite its appeal, it was not as well documented, and we spent many hours early in the hackathon trying to integrate it before choosing another route. In the end, we decided that neither model could handle the scale and complexity of a full codebase, and that we needed to build our own knowledge graph creation tool from scratch.

After constructing the graph by recursively traversing the AST of every file in the repository, we discovered that it needed a significant amount of cleaning. Nodes carrying minimal data, such as small functions with no children, were not worth keeping, since their parent function, class, or file already contained all of the relevant information. We also needed to remove nodes whose raw text was too large for the model to reasonably handle. This parsing, pruning, and later serializing of the graph took a significant amount of time and effort that we had hoped to spend integrating a preexisting model.
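The pruning pass described here can be sketched as a simple filter over nodes and edges. The thresholds, the node dictionary shape, and the `(parent, child)` edge convention are our assumptions, not values from the project:

```python
def prune_graph(nodes, edges, min_chars=40, max_chars=4000):
    """Drop leaf nodes whose text is too small to add value (their parent
    already contains it) and nodes whose text is too large for the model,
    then drop any edges left dangling. Thresholds are illustrative."""
    has_children = {parent for parent, _child in edges}
    keep = set()
    for nid, node in nodes.items():
        too_small = len(node["text"]) < min_chars and nid not in has_children
        too_large = len(node["text"]) > max_chars
        if not too_small and not too_large:
            keep.add(nid)
    pruned_nodes = {nid: nodes[nid] for nid in keep}
    pruned_edges = [(a, b) for a, b in edges if a in keep and b in keep]
    return pruned_nodes, pruned_edges
```

A tiny helper function's text is already visible inside its parent's node, so removing the child loses nothing while shrinking the graph.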

Ultimately, we struggled to get our model to perform as well as we had hoped. This could have been due to limitations in the model itself, a lack of training data, or a fault in our graph parsing implementation.

## 😁 Accomplishments that we're proud of
We're proud of our hard work pulling together a variety of tools, models, and algorithms into a cohesive and effective tool for navigating codebases. We're particularly proud of our knowledge graph parser, which we built from scratch and which we believe is a powerful way to understand the relationships between files, classes, and functions in a codebase, with applications beyond our immediate project.

We're also happy with the balance of accuracy and efficiency in our graph retrieval algorithm, and we're excited to see where the project goes from here.

## 🧑🎓 What we learned
We learned about the challenges of working on a full machine learning pipeline that requires data cleaning, embedding, and fine-tuning. At every step of the way, we faced tough decisions that stretched us and led to valuable discussions about the best way to proceed.

We also gained meaningful experience building our own graph creation and retrieval algorithms, which was a new and exciting challenge for us. We really enjoyed pursuing this open-ended problem, as there was a huge variety of approaches we could have taken.

## 🔮 What's next for Napkin
There are four main areas where Napkin's performance could be enhanced and fine-tuned:
1. The knowledge graph
2. Chatbot fine-tuning
3. Graph retrieval fine-tuning
4. Embedding fine-tuning

Firstly, the knowledge graph could be improved by supporting more nodes and edges, particularly more categories than just files, classes, and functions.

Secondly, the chatbot could be further fine-tuned to better answer codebase questions and provide more relevant information. This would require some prompt engineering to best present the information to the chatbot, as well as additional training on a larger dataset of codebase questions and answers.

Thirdly, our graph retrieval algorithm, a greedy heuristic search that maximizes cosine similarity at each step, was just one of many options. For example, we discussed pursuing Monte Carlo Tree Search as our main algorithm and adding weighted edges based on the type of relationship between components, encouraging the model to traverse the graph in a more meaningful way.

Finally, the embeddings used to represent nodes in the graph could be fine-tuned to better capture the relationships between nodes. This would require a more sophisticated embedding model, as well as a more sophisticated method of training it.

Furthermore, we would love to see Napkin expanded to other languages such as Java, C, Golang, and Rust, as well as to a broader range of codebases, such as web applications, mobile applications, and more. Napkin's potential as a tool for developers to navigate and understand codebases is immense, and we are excited to see where it goes from here.
BIN  content/posts/napkin-ai/napkin_logo.png (new file, 6.2 KiB; binary file not shown)
BIN  content/posts/napkin-ai/pipeline.jpg (new file, 2 MiB; binary file not shown)
@@ -9,7 +9,7 @@ menu:
   sidebar:
     name: Pen Thief
     identifier: pen-thief
-    weight: 4
+    weight: 11
 tags: ["Basic", "Multi-lingual"]
 categories: ["Basic"]
 ---
@@ -9,7 +9,7 @@ menu:
   sidebar:
     name: Robotech
     identifier: robo-tech
-    weight: 4
+    weight: 9
 tags: ["Basic", "Multi-lingual"]
 categories: ["Basic"]
 ---
15  content/posts/slam-simulation/index.md (new file)
@@ -0,0 +1,15 @@
+---
+title: "Diff-Drive SLAM with Nav2"
+date: 2025-01-05T09:00:00+00:00
+description: Introduction to Sample Post
+hero: images/error_plot.png
+author:
+  image: /images/sharwin_portrait.jpg
+menu:
+  sidebar:
+    name: SLAM with Nav2
+    identifier: slam-simulation
+    weight: 7
+tags: ["Basic", "Multi-lingual"]
+categories: ["Basic"]
+---
@@ -9,7 +9,7 @@ menu:
   sidebar:
     name: Toastbot
     identifier: ToastBot
-    weight: 1
+    weight: 2
 tags: ["Basic", "Multi-lingual"]
 categories: ["Basic"]
 ---
@@ -28,15 +28,15 @@ projects:
     repo: https://github.com/Sharwin24/Mobile-Manipulation
     url: "/posts/mobile-manipulation/"
     summary: Simulating a pick and place task with the KUKA YouBot using a task-space feed-forward PID controller.
-    tags: ["Mobile Robotics", "Python", "CoppeliaSim"]
+    tags: ["Python", "CoppeliaSim", "Odometry", "Omnidirectional Robot Kinematics"]

   - name: Toasting Bread with a Franka Robot Arm
     role: "ME495: Embedded Systems Final Project"
     timeline: "Nov 2024 - Dec 2024"
     repo: https://github.com/snydergi/ToastBot
     url: "/posts/toastbot/"
-    summary: A robotic system that toasts bread using a Franka Emika Panda robot arm.
-    tags: ["Python", "ROS", "Robotics", "Franka Robot Arm"]
+    summary: A system that toasts bread using a Franka Emika Panda robot arm and an Intel Realsense Camera.
+    tags: ["Python", "ROS", "Moveit API", "Intel Realsense"]

   - name: Automated Poker Table
     image: /images/bike_dealer.jpg
@ -53,7 +53,7 @@ projects:
|
|||
timeline: "Sept 2021 - Dec 2021"
|
||||
url: "/posts/chess-robot/"
|
||||
repo: "https://github.com/Connor205/Chess-Robot-NURobotics"
|
||||
summary: A gantry robot with a camera that plays chess against a human opponent.
|
||||
summary: A gantry robot equipped with a camera to autonomously play and teach chess to human opponents.
|
||||
tags: ["Python", "OpenCV", "Arduino", "Stepper Motors"]
|
||||
|
||||
- name: Robot Arm Educational Kit
|
||||
|
@ -62,8 +62,8 @@ projects:
|
|||
timeline: "May 2022 - May 2024"
|
||||
url: "/posts/robot-arm-edu/"
|
||||
repo: https://github.com/Shawin24/
|
||||
summary: A 3D-printed robot arm kit for educational purposes.
|
||||
tags: ["3D Printing", "Arduino"]
|
||||
summary: A 3D-printed robot arm kit for educational purposes. Coupled with a software package intended for students with little to no experience coding to use.
|
||||
tags: ["3D Printing", "Arduino", "C++"]
|
||||
|
||||
- name: AGV Odometry & Motion Planning
|
||||
image: /images/pure-pursuit.png
|
||||
|
@ -83,11 +83,12 @@ projects:
|
|||
tags: ["Python", "ROS", "Gazebo", "Nav2"]
|
||||
|
||||
- name: Autonomous Drone Swarm Simulation
|
||||
image: /images/robotech_sim.gif
|
||||
role: "RoboTech 2022 Hackathon Project"
|
||||
timeline: "April 2022"
|
||||
url: "/posts/robo-tech/"
|
||||
repo: https://github.com/Sharwin24/RoboTech
|
||||
summary: A simulation of a swarm of drones using ROS and Gazebo
|
||||
summary: A simulation of a swarm of drones cleaning algal blooms in a lake using RRT and A* path planning.
|
||||
tags: ["Python", "RRT", "A*"]
|
||||
|
||||
- name: Robot Pen Thief
|
||||
|
|