Haptic Glove Update — March 21, 2017

The haptic glove project is progressing nicely!

At the beginning of the year, we decided to branch out a bit from our previous design to explore other solutions. However, we have returned to a largely similar design with some key modifications. The new glove is almost entirely made of 3D-printed parts, uses new communication protocols and tracking algorithms, and now integrates with Unity!

We have already printed most of the glove and have been working on mounting components. The next steps are finishing up the finger exoskeleton design and integrating it with the rest of the glove. The exoskeleton should be one of the more interesting improvements in this prototype. Instead of extending the exoskeleton along the sides of the fingers to create joints, it now runs exclusively across the top. The new mechanism creates what we call “virtual joints” outside of its physical structure, which allow it to move freely with the finger while maintaining its internal structure. This allows us to create pressure just at the fingertip without using the user’s own fingers for leverage.

On the software side, we are currently focusing on smoothing out the motion of the servos and improving the reliability of the force measurements. Instead of modifying servos to read their position, we are using external force sensors. While it adds a bit of complexity, it greatly reduces the twitchiness of the glove. This also means our communication system is a lot cleaner. Now, instead of sending full coordinates and force data back and forth, we just send position values to Unity and Unity responds with a target force. Unity builds a model finger from the glove’s positional tracking and calculates the required force on each joint that will result in a realistic experience. This system cuts back on the amount of data that is sent back and forth and improves the speed of the overall simulation.
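
To make that exchange concrete, here is a rough Processing sketch of the position-out / force-back loop. It is only an illustration: the serial port choice, the comma-separated message format, and the force calculation are all stand-ins (in the real system, Unity's finger model computes the target force).

    import processing.serial.*;

    Serial glove;

    void setup() {
      // Placeholder port selection and baud rate -- not the glove's real settings.
      glove = new Serial(this, Serial.list()[0], 115200);
    }

    void draw() {
      // Hypothetical message: one line of comma-separated joint positions,
      // e.g. "12,34,56" for three finger joints.
      String msg = glove.readStringUntil('\n');
      if (msg == null) return;

      String[] parts = split(trim(msg), ',');
      float[] targetForce = new float[parts.length];
      for (int i = 0; i < parts.length; i++) {
        float jointAngle = float(parts[i]);
        // Stand-in for Unity's physics model: more bend, more resistance.
        targetForce[i] = constrain(jointAngle / 180.0 * 100.0, 0, 100);
      }

      // Reply with one target force per joint in the same comma-separated style.
      glove.write(join(nf(targetForce, 0, 1), ',') + "\n");
    }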

We are finishing up integrating all of these parts and hope to have a full demo online by mid-April.

New Links Page! — March 12, 2017

Check out my new Links page at the top of the site!

I am working on a few web projects and decided to have a common place to link to them. I will add to the list as I produce more.

You’ll notice that the first link is a web version of the Letter By Letter Visualization that I discussed in an earlier post. This is the first of a series of text visualizations that I am working on putting online. I am planning on setting up a site to sell custom art pieces generated from customer-submitted text once I have a few visualizations together.

Let me know if there are any specific visualizations I have done that you’d like to be able to experiment with online!

BEAM Programming Camp — September 26, 2016

Earlier this summer, my friend Eli and I planned and taught a 1.5 week programming camp to a group of middle/high schoolers. Some of them had no programming experience, some had dabbled in a bit of code, and some had taken full computer science classes. We created a camp that tried to cater to everyone and teach them all something new.

We taught a combination of software and hardware development. Starting off with Processing, the campers created a basic runner game with obstacles that the player had to jump over. They learned all of the basics of Object Oriented Programming and simple Java syntax.

Building off of this, we moved into hardware development with the Arduino. Programming in Processing and programming for the Arduino are very similar, and it isn’t hard to use them together. After a quick intro and a basic Arduino lesson, this allowed the campers to make Arduino game controllers for their Processing games.
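
As a rough illustration of how the Processing side of one of those controllers can work, here is a minimal sketch. The port selection and the one-character ‘0’/‘1’ button protocol are stand-ins, not what the campers actually wrote; the matching Arduino sketch would simply print ‘1’ while its button is pressed.

    import processing.serial.*;

    Serial controller;
    boolean jumpPressed = false;

    void setup() {
      size(400, 300);
      // Assumes the Arduino shows up as the first serial port; adjust as needed.
      controller = new Serial(this, Serial.list()[0], 9600);
    }

    void draw() {
      background(255);
      // Read the latest button state sent by the Arduino ('1' = pressed).
      while (controller.available() > 0) {
        char c = controller.readChar();
        jumpPressed = (c == '1');
      }
      // A stand-in "player" square that jumps while the button is held.
      fill(jumpPressed ? 0 : 200);
      rect(180, jumpPressed ? 100 : 200, 40, 40);
    }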

The students created a runner game, a brick breaker game, and a projectile dodging game. After the students built their Arduino controllers, we moved on to using third-party libraries so they could use their Xbox controllers with their games.

Finally, the students focused on finishing one of the games of their choice. Some made high score systems, some themed their games (Breaking Bad themed brick breaker), and some added bullets to the dodging game. A few students even made their games 3D.

 

Terrain Generation — August 28, 2016

My third series in my AP Portfolio was on terrain generation. Computers mainly use terrain generation for simulations and games. For each of the visualizations in this series, I used variations on the “midpoint displacement algorithm.” The idea behind this algorithm is that you can add layers of detail to a data set by inserting values based on the averages of the surrounding values. For example, if you have two points with the values 10 and 20, you can add another point between them with the value 15. By adding in a bit of random variation at each step, a computer can create realistic terrain from only two starting points.
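
As a small illustration of that idea, here is a Processing-style sketch of the midpoint step applied a few times to those two starting values. The number of passes and the size of the random variation are arbitrary choices:

    // One refinement pass: keep the existing values, insert a displaced
    // midpoint between each pair of neighbors.
    float[] refine(float[] pts, float variation) {
      float[] finer = new float[pts.length * 2 - 1];
      for (int i = 0; i < pts.length - 1; i++) {
        finer[2 * i] = pts[i];
        float midpoint = (pts[i] + pts[i + 1]) / 2.0;
        finer[2 * i + 1] = midpoint + random(-variation, variation);
      }
      finer[finer.length - 1] = pts[pts.length - 1];
      return finer;
    }

    void setup() {
      float[] points = {10, 20};   // the two starting values from the example above
      float variation = 5;
      // Each pass doubles the detail; shrinking the variation keeps the overall
      // shape while adding finer and finer detail.
      for (int pass = 0; pass < 4; pass++) {
        points = refine(points, variation);
        variation *= 0.5;
      }
      for (float p : points) println(p);   // 17 values, with 10 and 20 still at the ends
    }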

One Dimension: Noise

The first version of this generates noise. At each iteration, the computer creates new points in between pairs of existing points.

Over multiple iterations, this creates a more and more detailed set of points spaced at varying distances. The result can be played back as tones (hence the name “noise”) or simply left as a series of randomly spaced points.

My final piece for this visualization included some extra colored circles for context:

[Image: final one-dimensional noise piece with added colored circles]


Two Dimensions: Path

In two dimensions, instead of varying the spacing, the algorithm varies the height of each point. With some coloring, this eventually creates a detailed 2D terrain like the bottom layer of the image below:

[Image: the layers built up by the terrain generation algorithm]

To better illustrate this, I laid out the algorithm’s steps by color:

[Image: the algorithm’s steps separated by color]
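
Here is a rough sketch of that two-dimensional version in Processing: the same midpoint step as above, but treating the values as heights and drawing the result as a filled terrain profile. The iteration count, starting heights, and colors are arbitrary choices, not the ones from my piece:

    float[] heights = {150, 150};

    void setup() {
      size(800, 300);
      float variation = 80;
      // Repeatedly insert displaced midpoints between neighboring heights.
      for (int pass = 0; pass < 9; pass++) {
        float[] finer = new float[heights.length * 2 - 1];
        for (int i = 0; i < heights.length - 1; i++) {
          finer[2 * i] = heights[i];
          finer[2 * i + 1] = (heights[i] + heights[i + 1]) / 2.0
                             + random(-variation, variation);
        }
        finer[finer.length - 1] = heights[heights.length - 1];
        heights = finer;
        variation *= 0.5;   // less variation at each finer level of detail
      }
      // Fill below each point so the profile reads as solid ground.
      background(210, 230, 255);
      stroke(90, 70, 50);
      for (int i = 0; i < heights.length; i++) {
        float x = map(i, 0, heights.length - 1, 0, width);
        line(x, height, x, height - heights[i]);
      }
    }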


Three Dimensions: Terrain

The midpoint displacement algorithm is most commonly used for 3D terrain generation. In this form, it is called the diamond-square algorithm.

[Image: diagram of the diamond-square algorithm]

As shown in the above image, the algorithm takes a series of alternating diamonds and squares and fills in the center point of each. By repeating this hundreds of times, you end up with a 3D terrain map.
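
For anyone curious, here is a compact Processing sketch of the standard diamond-square algorithm, drawn as a grayscale heightmap. The grid size, corner seeding, and random range are arbitrary choices, and this is not the exact code behind my pieces:

    int n = 129;                 // grid size must be 2^k + 1 for diamond-square
    float[][] h = new float[n][n];

    void setup() {
      size(516, 516);
      noStroke();
      // Seed the four corners, then alternate diamond and square steps,
      // halving the step size and the random range each round.
      h[0][0] = h[0][n-1] = h[n-1][0] = h[n-1][n-1] = 0.5;
      float range = 0.5;
      for (int step = n - 1; step > 1; step /= 2) {
        int half = step / 2;
        // Diamond step: the center of each square gets the average of its corners.
        for (int x = half; x < n; x += step) {
          for (int y = half; y < n; y += step) {
            float avg = (h[x-half][y-half] + h[x-half][y+half]
                       + h[x+half][y-half] + h[x+half][y+half]) / 4.0;
            h[x][y] = avg + random(-range, range);
          }
        }
        // Square step: the center of each diamond gets the average of its neighbors.
        for (int x = 0; x < n; x += half) {
          for (int y = (x + half) % step; y < n; y += step) {
            float sum = 0;
            int count = 0;
            if (x - half >= 0) { sum += h[x-half][y]; count++; }
            if (x + half <  n) { sum += h[x+half][y]; count++; }
            if (y - half >= 0) { sum += h[x][y-half]; count++; }
            if (y + half <  n) { sum += h[x][y+half]; count++; }
            h[x][y] = sum / count + random(-range, range);
          }
        }
        range *= 0.5;
      }
      // Draw the result as a grayscale heightmap (lighter = higher).
      for (int x = 0; x < n; x++) {
        for (int y = 0; y < n; y++) {
          fill(constrain(map(h[x][y], -0.5, 1.5, 0, 255), 0, 255));
          rect(x * 4, y * 4, 4, 4);
        }
      }
    }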

For my final piece on 3D terrain generation for my concentration, I again tried to combine each step of the algorithm into the final image, shown below. Each step is displayed, stacked on top of the previous one and denoted by a difference in color. You can see how it gets more and more detailed as the algorithm progresses.

[Image: final 3D terrain generation piece with the algorithm’s steps stacked]


Four Dimensions: Space

This is where this all gets a little bit confusing. The first three dimensions of this algorithm are somewhat straightforward. They have basic goals and display methods. The fourth dimension, time, is a bit harder to picture. To make it a bit easier, imagine the fourth dimension is color. In each previous visualization, only the last dimension is affected by the algorithm. For the terrain, the x and y positions of each pixel are set, only the height of the pixel is changed. Extending this to the fourth-dimensional visualization, the position is completely locked and only the color is changed. Two different examples:

 

These are clouds of color formed by what I call the Octo-Hexahedron algorithm, essentially a 3D extension of the diamond-square algorithm: a full cube filled with a cloud of color. The fourth dimension can also be represented by transparency.

[Image: a color cloud with the fourth dimension shown as transparency]


Finally, you can actually represent the fourth dimension with time. This is in the form of an animation:


To tie these all together, I made a video with both the color and time-based animation:


For the final still image for my concentration, I had to once again combine steps into one image. I used the same stacked approach with color changes to show time progression.

[Image: final still of the Hexahedron-Octohedron algorithm]


I had a ton of fun making this collection of pieces and exploring the midpoint displacement algorithm and its uses.

If you have any questions, comment below! Thanks!

 

Fabric Simulation — August 8, 2016

For my AP Computer Science final project, I created a fabric simulator from scratch in Processing. This idea spawned from my HAPTIX project. I wanted a way to better model the full 3D positioning of a glove with only a few fabric stretching measurements.

I designed the simulator to handle different stitching patterns, so that I could use it to simulate any combination of materials for the glove project.

I tested the simulation by using a ball to push the virtual fabric around.

[Image: a ball pushing the virtual fabric around]

The fabric was made up of a matrix of “nodes” with connecting stretchy “links.” It also included stiffness links that kept the fabric from bunching up. Essentially, these stiffness links represent the thickness and actual matter of the fabric in the real world. In the next two pictures, the black links are stretch links and the red links are stiffness links. The stiffness links actually span over two nodes in order to prevent the fabric from folding right in half.
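
To give a feel for how a node-and-link fabric like this can be simulated, here is a minimal Processing sketch in the same spirit. It is not my actual simulator: the spring constants, damping, gravity, and grid size are invented, and it only includes the stretch and stiffness links described above.

    int cols = 20, rows = 20;
    float spacing = 15;
    PVector[][] pos, vel;

    void setup() {
      size(400, 400, P3D);
      pos = new PVector[cols][rows];
      vel = new PVector[cols][rows];
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          pos[i][j] = new PVector(50 + i * spacing, 50 + j * spacing, 0);
          vel[i][j] = new PVector(0, 0, 0);
        }
      }
      strokeWeight(3);
    }

    // Pull two connected nodes toward the link's rest length: a "stretch" link
    // joins direct neighbors, while a "stiffness" link spans over two nodes.
    void applyLink(int i1, int j1, int i2, int j2, float restLen, float k) {
      PVector delta = PVector.sub(pos[i2][j2], pos[i1][j1]);
      float stretch = delta.mag() - restLen;
      delta.normalize();
      delta.mult(stretch * k * 0.5);
      vel[i1][j1].add(delta);
      vel[i2][j2].sub(delta);
    }

    void draw() {
      background(255);
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          if (i < cols - 1) applyLink(i, j, i + 1, j, spacing, 0.2);       // stretch links
          if (j < rows - 1) applyLink(i, j, i, j + 1, spacing, 0.2);
          if (i < cols - 2) applyLink(i, j, i + 2, j, spacing * 2, 0.05);  // stiffness links
          if (j < rows - 2) applyLink(i, j, i, j + 2, spacing * 2, 0.05);
        }
      }
      stroke(0);
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          if (j > 0) {                    // the top row is pinned in place
            vel[i][j].y += 0.05;          // a little gravity so the fabric hangs
            vel[i][j].mult(0.95);         // damping keeps it from oscillating forever
            pos[i][j].add(vel[i][j]);
          }
          point(pos[i][j].x, pos[i][j].y, pos[i][j].z);
        }
      }
    }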

Next year, in my Projects in CS class, I hope to use this simulation to create real time accurate hand models for HAPTIX.

Visualization Portfolio: Letter by Letter — March 18, 2016

The next series in my data visualization project was on how computers understand text. As humans, we read text as words and groups of words that we call phrases. In the English language and many others, these are made up of characters that we call letters. Computers see text as individual characters that are really just represented by numbers. For this series, I wanted to illustrate the difference in how we interpret language and show the benefits of reading it the way computers do.

Basic Visualization Explanation

In this visualization, each word is represented by a curving line. As the line travels along the image, it flows vertically based on the letters in the word. For example, here is the word “language.”

[Image: the word “language” drawn as a line]

The alphabet is organized with ‘a’ at the bottom and ‘z’ at the top. The letters are at the flat sections or “plateaus” on the line. In this visualization, you can clearly see the two ‘a’ characters as the lowest two points in the diagram.
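
Here is a rough Processing sketch of that mapping for a single word. The plateau widths and the straight slopes between letters are my simplifications for this example; the real visualization uses smooth curves.

    String word = "language";

    void setup() {
      size(600, 300);
      background(255);
      stroke(0);
      float segment = width / float(word.length());
      float prevY = letterY(word.charAt(0));
      for (int i = 0; i < word.length(); i++) {
        float y = letterY(word.charAt(i));
        float x = i * segment;
        line(x, prevY, x + segment * 0.4, y);        // slope from the previous letter
        line(x + segment * 0.4, y, x + segment, y);  // flat "plateau" for this letter
        prevY = y;
      }
    }

    // 'a' sits near the bottom of the image and 'z' near the top.
    float letterY(char c) {
      return map(c - 'a', 0, 25, height - 20, 20);
    }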

The English Dictionary

The first set of words I visualized was the English dictionary. I used the Merriam-Webster dictionary and set a maximum word length of 28 characters (the length of the word antidisestablishmentarianism).

[Image: the full English dictionary visualized]

This is the exact same format as the first image. I made each line a bit transparent so that you can see them stack up along popular letters. This is why it also appears to fade out. If you click on the image and zoom in, you can see the longer words almost all on their own on the right side of the image.

There are some interesting patterns that we can see in this image. Below, I have highlighted the positions showing vowels with red lines and the position of ‘s’ with a green line.

[Image: the dictionary visualization with vowels highlighted in red and ‘s’ in green]

We can see that these letters are popular based on the concentration of lines going to and from them. Another very interesting phenomenon is that of the letter ‘q’: it is almost always followed by the letter ‘u’, and we can see this very clearly in the visualization. In the close-up below, the middle letter is ‘q’, and almost every time it appears it is followed by ‘u’.

[Image: close-up of ‘q’ and the lines leading to ‘u’]

 

The Declaration of Independence

The next set of words was the Declaration of Independence. This is where we get into the actual usage of the English language.

[Image: the Declaration of Independence visualized]

This is at a different scale than the above visualizations because we don’t actually use 28-letter words in normal writing or speech. Another difference is the added row at the top of the image that denotes punctuation. With this row, we can see things like the length of the last word of each sentence. In the Declaration of Independence, sentences tend to end with 5 to 8 letter words. There are also no words that start with x, y, or z. The letter ‘q’ is almost never used and neither is the letter ‘j’. Words actually end in the letter ‘y’ fairly often.

We can also compare different sets of words. Here is the Declaration of Independence (white lines) and the English dictionary (black lines):

[Image: the Declaration of Independence (white lines) overlaid on the English dictionary (black lines)]

From this, we can see that we don’t usually use the last quarter (top) of the alphabet much and that, in actual writing, we don’t often use longer words.

 

Language Comparison

After comparing different texts, the natural next step is comparing languages. I used the Declaration of Independence (English: blue) as my reference document and translated it into Somali (green) and French (red).

[Image: the Declaration of Independence in English (blue), Somali (green), and French (red)]


The first pattern I noticed in this was the French use of the apostrophe after the first letter of a word. If you look at the second letter (second plateau) and then at the top of the image, there is a collection of red lines going to the row that represents punctuation. This is the visual representation of French words like d’Amerique, l’abolir, and l’ont. This pattern of punctuation is not very common in many languages, so it stands out in this visualization.

Next, if you look at the bottom of the visualization, you can see that there is a heavy concentration of green lines leading to ‘a’. This is the visual representation of the commonality of the letter ‘a’ in the Somali language.

One final pattern is related once again to the letter ‘q’. In the Somali language, the letter ‘q’ is not always followed by ‘u’. In the image below, the blue and red lines (English and French) follow the “qu” pattern but the green line (Somali) does not.

[Image: close-up of ‘q’ in English, French, and Somali]

 

I wrote this visualization to show how computers look at language, but I have realized that it can also help us understand our language as a whole and its relation to other languages. Using patterns found in different languages, and even in the writing of different authors, we could use this sort of visualization to recognize writing styles and languages in a visual manner.

 

Just for fun, here is a visualization of this blog post:

[Image: this blog post visualized]

(Guess what the longest word is….)

 

Comment below if you see any other interesting patterns that I didn’t point out!

Visualization Portfolio: Sorting — March 11, 2016

As part of the AP Studio Art course that I posted about a few months ago, I will be submitting a portfolio to the College Board in a few months. Currently I am working on my concentration, a set of 12 images focused around a central theme or topic. I have chosen to focus on visualizing computer algorithms and the patterns behind them. Here are a few of the algorithms I focused on for the first few pieces.

Sorting Algorithms

Because of their prevalence in computer science, I looked at sorting algorithms for my first few pieces. I started off by pulling together a few algorithms in Processing: I created implementations of bubble sort and quicksort and wrote an algorithm of my own (similar to an insertion sort). I then stored their progress at each step of the way while they sorted a random list.

This is a static visualization of a bubble sort. In this piece, time is represented vertically and position in the list is represented horizontally. Value is represented by color. At the beginning of the sorting process, the list is completely scrambled (at the top) and once the algorithm is done working (at the bottom), it is completely sorted. This algorithm works by switching pairs of numbers that are out of order in the list. This is what produces the diagonal lines in the visualization.

[Image: static visualization of a bubble sort]
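
For anyone who wants to play with the idea, here is a small Processing sketch in the same style: each row is one snapshot of the list, top to bottom, with each value mapped to a grayscale shade rather than the colors in the piece. The list size is arbitrary:

    int n = 200;

    void setup() {
      size(400, 400);
      noStroke();
      float[] list = new float[n];
      for (int i = 0; i < n; i++) list[i] = random(1);

      for (int row = 0; row < n; row++) {
        // Draw the current state of the list as one row of shaded cells.
        for (int i = 0; i < n; i++) {
          fill(list[i] * 255);
          rect(i * width / float(n), row * height / float(n),
               width / float(n), height / float(n));
        }
        // One bubble-sort pass: swap adjacent pairs that are out of order.
        for (int i = 0; i < n - 1; i++) {
          if (list[i] > list[i + 1]) {
            float tmp = list[i];
            list[i] = list[i + 1];
            list[i + 1] = tmp;
          }
        }
      }
    }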

Using different sorting algorithms and visualization methods, I found patterns and interesting visual effects that are purely based on how the computer is “thinking.” Below is a different visualization of an insertion-style sort. See the video link in the caption too!

[Image: visualization of an insertion-style sort, with a linked video of the computer in action on this sort]

For quicksort, I handmade a piece to show the patterns behind it. This was done with paint on glass and photographed with tissue paper added.

[Image: handmade quicksort piece, paint on glass photographed with tissue paper]

Finally, to tie all of these algorithms together into a single piece, I combined two visualizations of each algorithm and a few contextual labels and numbers. The numbers across the top of the picture below are the list before it was sorted and the numbers at the bottom are the sorted list. The sorting algorithms from left to right are: bubble sort, quicksort, and my personal version of an insertion sort.

This was a giant picture and something I would love to have printed very large in order to show the intricacies of the algorithmic visualizations.

[Image: the combined sorting algorithms piece]

Below is a close up of a part of the top left visualization to show a bit more detail.

[Image: close-up of part of the top-left visualization]

So those are my sorting algorithm visualizations! I will be posting about more art/computer projects hopefully in the next few days as I wrap up a couple new pieces.

Thanks for checking these out!

Using an ultrasonic sensor to test the glove — January 11, 2016

I read an article the other day about using a very simple haptic glove to sense objects that someone cannot see. The use for this specific glove is for rescue workers in floods who need to sense underwater objects. I thought this was pretty interesting, and it reminded me of a Y Combinator hardware hackathon project that was a proof of concept of haptic technology. I think that, using an ultrasonic range-finding array, this could be a good proof of concept for the different glove ideas. Next year, I am trying to take a class at school called Projects in CS where you present an idea and then spend the semester working on it. Since this is a very limited timeframe, a simpler proof-of-concept project like this one could be perfect.

AP Studio Art I — November 1, 2015

I’m taking AP Studio Art at school this year and I thought I’d post some of the things I’ve been working on. These are two of my highlights from the first month or so of the class.

DROPS AND SPLICES

[Images: four algorithmically generated pieces from the Drops and Splices series]

These are from a series I worked on with algorithmic art. They were all generated by a program I wrote using Processing. If you would like to look at the code, email me at mccannemmett@gmail.com


EYE GLASS

[Images: two shots of the painted jar]

These are a few different shots of a jar I painted and photographed. I painted on the bottom and sides of the jar and then used different lighting styles to alter the coloration and focus. I did not edit these shots.

[Image: another shot of the painted jar]

Just a few examples of projects I am working on! I’ll post some more soon!

Projector Add-on — September 15, 2015

I have been thinking a lot about an add-on for a projector. My idea consists of a camera attached to the projector and wired back to the source computer. This could be an open-source system, possibly with 3D-printed clips for the camera. It is a very simple hardware setup with many possible features.

The original idea’s main feature was a color adjustment system. The camera could compare the projected image to the source image from the computer and notice any differences. This system would allow you to project onto any flat surface regardless of color variation. Say there was a post-it note on the projection surface: this would cause a small part of the projection to appear as a different color. The system would notice that and modify the projected image to make the post-it “disappear.”
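
As a toy sketch of that color-adjustment idea (ignoring camera capture, alignment, and calibration entirely, and using made-up file names), the per-pixel correction could look something like this in Processing:

    PImage source, captured;

    void setup() {
      size(640, 480);
      // Placeholder file names; a real system would grab "captured" from the
      // camera, align it to the projector output, and match its size to "source".
      source = loadImage("source.png");
      captured = loadImage("captured.png");
    }

    void draw() {
      source.loadPixels();
      captured.loadPixels();
      PImage corrected = createImage(source.width, source.height, RGB);
      corrected.loadPixels();
      for (int i = 0; i < corrected.pixels.length; i++) {
        // Wherever the captured image came out darker or tinted compared with
        // what we meant to show, push the projected pixel the other way.
        float errR = red(source.pixels[i])   - red(captured.pixels[i]);
        float errG = green(source.pixels[i]) - green(captured.pixels[i]);
        float errB = blue(source.pixels[i])  - blue(captured.pixels[i]);
        corrected.pixels[i] = color(
          constrain(red(source.pixels[i])   + errR, 0, 255),
          constrain(green(source.pixels[i]) + errG, 0, 255),
          constrain(blue(source.pixels[i])  + errB, 0, 255));
      }
      corrected.updatePixels();
      image(corrected, 0, 0);
    }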

After thinking about this post-it concept for a while, I realized you could do even more with the camera’s ability to recognize things on the projection surface. With a specifically formatted sticky note, you could have the system recognize and highlight that note. You could even have it recognize and convert the handwriting on the note to text. This could be a nice form of input for meetings.

Another piece of this idea, one that has been tried by some companies, is making the system recognize hands. This means that you could effectively turn the projection surface into a giant touchscreen. This would be useful for a large number of purposes.

Overall, the system could be taught to recognize any number of situations and change the projected image based on these situations. By using gestures, handwriting, and color specific changes to the surface, a single person or a whole group could learn to work with the projection.

Does anyone have any other ideas on ways to interact with this sort of system?