Conor

Conor Russomanno

Neurotechnologist & Entrepreneur

Bio

I come from a mixed background of art, engineering, and design. As an undergraduate at Columbia University, I studied civil engineering & engineering mechanics while teaching computer graphics and developing Unity-based virtual environments under NSF funding. I later discovered brain-computer interfacing (BCI) as a Design & Technology MFA student at Parsons School of Design. I have been tirelessly pushing the BCI industry forward ever since, making technologies for recording brain activity more cost-effective and accessible to everybody. Having led two successful crowdfunding campaigns, raising close to $500,000, I now spend most of my time building OpenBCI. I also love teaching. I recently taught Creative Coding, Physical Computing, Designing Consciousness, and a number of other courses at Parsons School of Design. I now teach a course titled Neuromachina: Man & Machine at NYU Tisch School of the Arts.

CV

Work Experience

OpenBCI | Co-Founder & CEO
Brooklyn, NY (June 2013 — Present)

New York University ITP | Adjunct Faculty & “Something In Residence”
New York, NY (Jan 2016 — Present)

  • Courses taught: The Body Electric, Neuromachina: Man & Machine

Parsons School of Design (MFADT) | Adjunct Faculty
New York, NY (Sep 2013 — Dec 2016)

  • Courses taught: OpenBCI: Brain Hacking, Creativity & Computation (JS/Java/Arduino), The Digital Self: Interfacing the Body, Materials Spectrum Lab, Physical Computing, Designing Consciousness, Creative Coding (openFrameworks/C++)

NeuroTechNYC | Founder & Organizer
New York, NY (Jul 2015 — Present)

  • Coordinate monthly hack nights centered around the use of human-computer interface technologies

Felix Intelligent Local Advertising | Front-End Engineer
New York, NY (Jul 2013 — Dec 2013)

  • Designed and implemented internal browser-based dashboards and client-facing sites using JavaScript, HTML, and CSS

Brain Interface Lab | Founder & Director
New York, NY (Oct 2012 — June 2013)

  • This is where my BCI journey began, and it is also where the OpenBCI logo originated

Education

Parsons School of Design | M.F.A. Design & Technology
New York, NY (Aug 2011 — May 2013)

  • Concentrations: brain-computer interfaces, creative coding, physical computing, game design, & illustration

Columbia University | B.S. Civil Engineering & Engineering Mechanics
New York, NY (Aug 2007 — May 2011)

  • Concentrations: project management, 3D-modeling, computer graphics
  • Led the 3D content creation of a Unity-based virtual world (aka CyberGRID) under a $750M NSF grant

Thomas Jefferson High School for Science & Technology
Alexandria, VA (Aug 2003 — May 2007)

  • Ranked #1 Public High School in the U.S. by U.S. News & World Report (2007)

Blog

  • 3D printed EEG electrodes! (2/16/2015)

    I spent the day messing around with 1.75mm conductive ABS BuMat filament, trying to create a 3D-printable EEG electrode. The long-term goal is to design an easily 3D-printable EEG electrode that nests into the OpenBCI “Spiderclaw” 3D printed EEG headset.

    I decided to try to make the electrode snap into the standard “snappy electrode cable” that you see with some industry-standard EMG/EKG/EEG electrodes, like the one seen in the picture below.

    After some trial and error w/ AutoDesk Maya and a MakerBot Rep 1, I managed to print a few different designs that snap pretty nicely into the cable seen above. At first, Joel (my fellow OpenBCI co-founder) and I were worried that the snappy nub would break off, but, to our pleasant surprise, it was strong enough not to break with repeated use. Though the jury is still out, since we’ve only been snapping and unsnapping for one day.

    Here you can see a screenshot of the latest prototype design in Maya. I added a very subtle concave curvature to the “teeth” on the underside of the electrode so that the electrode will hopefully make better contact with the scalp.

    Here is a photo of a few different variations of the electrodes that were actually printed over the course of the day.

    I’d like to note that I printed each electrode upside-down, with the pointy teeth facing upward on the vertical (Z) axis, with a raft and supports, as seen in the picture below.

    I tested each of the electrodes with the OpenBCI board, trying to detect basic EMG/EEG signals from the O1/O2 positions on the back of the scalp—over the occipital lobe. I tried each electrode with no paste applied—simply conductive filament on skin. And then I tried each electrode with a small amount of Ten20 paste applied to the teeth. To my pleasant surprise, without applying any conductive Ten20 paste, I was able to detect small EMG artifacts by gritting my teeth, and very small artifacts from Alpha EEG brain waves, by closing my eyes. Upon applying the Ten20 paste, the signal was as good (if not better) than the signal that is recorded using the standard gold cup electrodes that come with the OpenBCI Electrode Starter Kit! Pretty awesome!
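
    If you want to do a similar sanity check in software, here is a rough sketch of the idea (this is not the OpenBCI GUI’s actual signal-processing code): estimate the power in the alpha band (~8-13 Hz) over a short window of samples with a plain discrete Fourier sum, assuming the board’s default sample rate of 250 Hz. It is written in Processing, the same Java-based framework the OpenBCI GUI uses.

    ```java
    // Rough illustration only (not the OpenBCI GUI's actual DSP): estimate the power of a
    // frequency band in a window of EEG samples with a plain discrete Fourier sum.
    // The 250 Hz sample rate matches the OpenBCI board's default.

    float bandPower(float[] window, float sampleRate, float loHz, float hiHz) {
      int n = window.length;
      float power = 0;
      // Sum squared DFT magnitudes for every frequency bin that falls inside [loHz, hiHz].
      for (int k = 1; k < n / 2; k++) {
        float freq = k * sampleRate / n;
        if (freq < loHz || freq > hiHz) continue;
        float re = 0, im = 0;
        for (int t = 0; t < n; t++) {
          float angle = TWO_PI * k * t / n;
          re += window[t] * cos(angle);
          im -= window[t] * sin(angle);
        }
        power += (re * re + im * im) / (n * n);
      }
      return power;
    }

    void setup() {
      // Fake one second of "eyes closed" data: a 10 Hz alpha rhythm buried in noise.
      float[] eeg = new float[250];
      for (int t = 0; t < eeg.length; t++) {
        eeg[t] = 10 * sin(TWO_PI * 10 * t / 250.0) + random(-20, 20);
      }
      println("alpha (8-13 Hz) power: " + bandPower(eeg, 250, 8, 13));
      println("beta (13-30 Hz) power: " + bandPower(eeg, 250, 13, 30));
    }
    ```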

    Here’s a screenshot of some very faint alpha (~10Hz) that I was able to pick up without any Ten20 paste applied to the electrode, with an electrode placed over the O2 node of the 10-20 system!

    And here’s a screenshot of some very vibrant alpha (~10Hz) that I was able to detect with Ten20 paste applied to the 3D-printed electrode!

    The signal looks pretty good. Joel may begin messing around with an active amplification hardware design that works with any 3D-printed snappy electrode design.

    In case you’re interested in printing your own, here’s a link to the github repo with the latest design of the electrode!

    More on this coming soon!

  • OpenBCI Graphical User Interface (GUI) (12/3/2014)
    [Image 1] — The OpenBCI Board (with which the OpenBCI GUI interfaces)

    Over the course of the late summer and early fall I worked extensively on the OpenBCI Graphical User Interface (GUI). The first version of the application, as seen in [Image 2] below, was developed by Chip Audette, who is one of the biggest OpenBCI contributors and runs the amazing blog EEG Hacker. The GUI is developed in Processing, a Java-based creative coding framework.

    [Image 2] OpenBCI GUI – Version 1

    I worked on:

    • [Image 3] updating the design & user experience (w/ the help of Agustina Jacobi)
    • [Image 4] adding a UI controller to manage the system state (initial hardware settings, startup, live data streaming mode, playback mode, synthetic data mode, etc.)
    • [Image 5] adding a UI controller to manage OpenBCI board channel settings
    • the startup protocol for establishing a connection between the OpenBCI GUI and the OpenBCI Board
    • a collapsible window for adding and testing new features, called the “Developer Playground”
    • a widget at the bottom of the application that gives feedback to the user about what the system is doing

    [Image 3] — OpenBCI GUI, Version 2

    [Image 4] — UI controller to manage the system state

    [Image 5] — UI controller to manage OpenBCI board channel settings

    To download the latest version of the OpenBCI GUI, check out the following Github repo! Don’t hesitate to fork it, make improvements, and try out new features in the developer playground. For more information on how to get up-and-running with the OpenBCI board, check out the following getting started guide on the OpenBCI website.
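
    If you have never used Processing, the tiny standalone sketch below (a hypothetical example, not code taken from the GUI repo) shows the pattern the whole GUI is built on: a setup()/draw() loop that redraws the screen every frame, here scrolling a trace of the newest samples across the window, much like a single channel of the GUI’s time-series plot.

    ```java
    // Tiny standalone Processing sketch (hypothetical, not taken from the OpenBCI GUI repo)
    // showing the basic pattern the GUI is built on: a setup()/draw() loop that redraws a
    // scrolling trace of the newest samples on every frame.

    int nSamples = 500;
    float[] trace = new float[nSamples];

    void setup() {
      size(800, 200);
      stroke(0);
      noFill();
    }

    void draw() {
      // Shift the buffer left by one and append a new (here: synthetic) sample.
      for (int i = 0; i < nSamples - 1; i++) trace[i] = trace[i + 1];
      trace[nSamples - 1] = 40 * sin(frameCount * 0.2) + random(-10, 10);

      background(255);
      beginShape();
      for (int i = 0; i < nSamples; i++) {
        vertex(map(i, 0, nSamples - 1, 0, width), map(trace[i], -60, 60, height, 0));
      }
      endShape();
    }
    ```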

  • [Make Magazine] OpenBCI: Rise of the Brain-Computer Interface (11/1/2014)

    I wrote the following article which was published in Volume 41 of Make Magazine!

    Conor wears an early prototype of the OpenBCI 3D-printable EEG Headset.

    This article first appeared in Make: Volume 41.

    During this summer’s Digital Revolution exhibition at London’s Barbican Museum, a small brainwave-influenced game sat sandwiched between Lady Gaga’s Haus of Gaga and Google’s DevArt booth. It was Not Impossible Labs’ Brainwriter installation, which combined Tobii eye tracking and an OpenBCI Electroencephalography (EEG) device to allow players to shoot laser beams at virtual robots with just eye movement and brain waves. “Whoa, this is the future,” exclaimed one participant.

    But the Brainwriter is designed for far more than just games. It’s an early attempt at using Brain-Computer Interface technology to create a comprehensive communication system for patients with ALS and other neurodegenerative disorders, which inhibit motor function and the ability to speak.

    The brain is one of the final frontiers of human discovery. Each day it gets easier to leverage technology to expand the capabilities of that squishy thing inside our heads. Real-world BCI will be vital in reverse-engineering and further understanding the human brain.

    Though BCI is in an embryonic state — with a definition that evolves by the day — it’s typically a system that enables direct communication between a brain and a computer, and one that will inevitably have a major impact on the future of humanity. BCIs encompass a wide range of technologies that vary in invasiveness, ease of use, functionality, cost, and real-world practicality. They include fMRI, cochlear implants, and EEG. Historically, these technologies have been used solely in medicine and research, but recently there’s been a major shift: As the technology becomes smaller, cheaper, and woven into the fabric of everyday life, many innovators are searching for real-world applications outside of medicine. It’s already happening, and it’s often driven by makers.

    OpenBCI 3D-printed EEG headset prototypes.

    The field is expanding at an astounding rate. I learned about it two and a half years ago, and it quickly turned into an obsession. I found myself daydreaming about the amazing implications of using nothing more than my mind to communicate with a machine. I thought about my grandma who was suffering from a neurodegenerative disorder and how BCIs might allow her to speak again. I thought about my best friend who had just suffered a severe neck injury and how BCIs might allow him to walk again. I thought about the vagueness of attention disorders, and how BCIs might lead to complementary or even supplementary treatments, replacing overprescribed and addictive medications.

    I went on to found OpenBCI with Joel Murphy as a way to offer access to every aspect of the BCI design and to present that information in an organized, collaborative, and educational way. I’m not the only one who sees the potential of this amazing new technology. But creating a practical, real-world BCI is an immense challenge — as the incredibly talented Murphy, who designed the hardware, says, “This stuff is really, really hard.” Many have attempted it but none have fully succeeded. It will take a community effort to achieve the technology’s potential while maintaining ethical design constraints. (It’s not hard to fathom a few not-too-far-off dystopian scenarios in which BCIs are used for the wrong reasons.)

    Russomanno (left) and Murphy demonstrate how to get started with OpenBCI.

    Of the many types of BCIs, EEG has recently emerged as the frontrunner in the commercial and DIY spaces, partly because it is minimally invasive and easily translated into signals that a computer can interpret. After all, computers are complex electrical systems, and EEG is the sampling of electrical signals from the scalp. Simply put, EEG is the best way to get our brains and our computers speaking the same language.

    EEG has existed for almost a hundred years and is most commonly used to diagnose epilepsy. In recent years, two companies, NeuroSky and Emotiv, have attempted to transplant EEG into the consumer industry. NeuroSky built the Mindwave, a simplified single-sensor system and the cheapest commercial EEG device on the market — and in doing so made EEG accessible to everyone and piqued the interest of many early BCI enthusiasts, myself included. Emotiv created the EPOC, a higher channel count system that split the gap between NeuroSky and research-grade EEG with regard to both cost and signal quality. While these devices have opened up BCI to innovators, there’s still a huge void waiting to be filled by those of us who like to explore the inner workings of our gadgets.

    UCSD researcher Grant Vousden-Dishington, working with OpenBCI at NeuroGaming 2014.

    With OpenBCI, we wanted to create a powerful, customizable tool that would enable innovators with varied backgrounds and skill levels to collaborate on the countless subchallenges of interfacing the brain and body. We came up with a board based on the Arduino electronics prototyping platform, with an integrated, programmable microcontroller and 16 sensor inputs that can pick up any electrical signals emitted from the body — including brain activity, muscle activity, and heart rate. And it can all be mounted onto the first-ever 3D-printable EEG headset.

    In the next 5 to 10 years we will see more widespread use of BCIs, from thought-controlled keyboards and mice to wheelchairs to new-age, immersive video games that respond to biosignals. Some of these systems already exist, though there’s a lot of work left before they become mainstream applications.

    The latest version of the OpenBCI board.

    This summer something really amazing is happening: Commercially available devices for interfacing the brain are popping up everywhere. In 2013, more than 10,000 commercial and do-it-yourself EEG systems were claimed through various crowdfunded projects. Most of those devices only recently started shipping. In addition to OpenBCI, Emotiv’s new headset Insight, the Melon Headband, and the InteraXon Muse are available on preorder. As a result, countless amazing — and maybe even practical — implementations of the BCI are going to start materializing in the latter half of 2014 and into 2015. But BCIs are still nascent. Despite big claims and big potential, they’re not ready; we still need makers, who’ll hack and build and experiment, to use them to change the world.

  • 3D printed EEG Headset (aka “Spiderclaw” V1) (12/17/2013)

    The following images are a series of sketches, screenshots, and photographs documenting my design process in the creation of the OpenBCI Spiderclaw (version 1). For additional information on the further development of the Spiderclaw, refer to the OpenBCI Docs Headware section and my post on Spiderclaw (version 2). If you want to download the .STL files to print them yourself or work with the Maya file, you can get them from the OpenBCI Spiderclaw Github repo. Also, if 3D printed EEG equipment excites you, check out my post on 3D printable EEG electrodes!

    10-20 System (Scientific Design Constraint)

    Concept Sketches

    3D Modeling (in AutoDesk Maya)

    3D Printing & Assembly

    Future Plans

  • ROB3115 – A Neuro-Immersive Narrative (8/12/2013)

    In-experience screenshot

    ROB3115 is an interactive graphic novel that is influenced by the reader’s brainwaves. The experience is driven by the reader’s ability to cognitively engage with the story. ROB3115’s narrative and its fundamental interactive mechanic – the reader’s ability to focus – are tightly intertwined by virtue of a philosophical supposition linking consciousness with attention.

    ROB3115 explores the intersection of interactive narrative, visual storytelling, and brain-computer interfacing. The experience, designed for an individual, puts the reader in the shoes of a highly intelligent artificial being that begins to perceive a sense of consciousness. By using a NeuroSky brainwave sensor, the reader’s brain activity directly affects the internal dialogue of the main character, in turn, dictating the outcome of his series of psychosomatic realizations. The system is an adaptation of the traditional choose-your-own-adventure. However, instead of actively making decisions at critical points in the narrative, the reader subconsciously affects the story via their level of cognitive engagement. This piece makes use of new media devices while, at the same time, commenting on the seemingly inevitable implications of their introduction into society.
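
    To make the mechanic concrete, here is a minimal sketch of the idea in Processing (names, thresholds, and scene labels are hypothetical, not taken from the actual ROB3115 code): attention values reported by the NeuroSky headset are accumulated during a scene, and at each branch point the running average quietly selects which thread of the internal dialogue comes next.

    ```java
    // Minimal sketch of the branching mechanic only (names, thresholds, and scene labels
    // are hypothetical, not from the actual ROB3115 code). The reader never presses a
    // button: attention values from the NeuroSky headset are accumulated during a scene,
    // and the running average quietly picks the next branch of the narrative.

    float attentionSum = 0;
    int samples = 0;

    // Called whenever the headset reports a new attention value (0-100).
    void recordAttention(float attention) {
      attentionSum += attention;
      samples++;
    }

    // At a branch point, pick a scene based on how engaged the reader has been, then reset.
    String nextScene(String highFocusScene, String lowFocusScene) {
      float avg = (samples == 0) ? 0 : attentionSum / samples;
      attentionSum = 0;
      samples = 0;
      return (avg > 55) ? highFocusScene : lowFocusScene;
    }

    void setup() {
      // Simulate a scene's worth of readings from a distracted reader.
      for (int i = 0; i < 20; i++) recordAttention(random(20, 50));
      println("next scene: " + nextScene("rob_accepts_consciousness", "rob_doubts_himself"));
    }
    ```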

    This project was my thesis for my M.F.A. in Design & Technology at Parsons.

  • Charcoal Mike (4/28/2013)

    It was my girlfriend’s birthday and she really likes Michael Jackson. I think this is the best charcoal I’ve ever done. 🙂

  • Dot – Graphic Novel Character Design (4/1/2013)

    Dot is one of the main characters in a sci-fi graphic novel that I’ve been working on as a side project. The story largely inspired my thesis, Rob3115, which is a graphic short story about a robot. The piece is interactive and is affected in real-time by the reader’s brainwaves.

  • Brain Interface Lab (3/29/2013)

    I recently founded the Brain Interface Lab with some colleagues from Parsons MFA Design & Technology and Columbia University. The lab is dedicated to supporting the open-source software and hardware development of brain-computer interfaces. Check out our website and all of the awesome stuff that was created during our first big event titled Hack-A-Brain:

  • audioBuzzers – Audio Visualizer (Unity) (3/6/2013)

    Summary

    This is a Unity-built audio visualizer of the song Major Tom, covered by the Shiny Toy Guns.

    Project Files

    The Web Player: http://a.parsons.edu/~russc171/UnityHW/AudioBuzzers_2/AudioBuzzers_2.html

    The Unity Project: http://a.parsons.edu/~russc171/UnityHW/hw_wk5_audioBuzzers.zip

    Screenshot

  • Demo Reel (3/6/2013)

    DEMO REEL BREAKDOWN

  • Plasma Ball Concentration Game (openFrameworks + Neurosky’s EEG Mindset) (12/21/2012)

    Project Summary

    This project relates to the brain-computer interface work I’ve been doing for my thesis. As I will soon be creating generative animations that respond to brain activity as part of a digital graphic novel, I wanted to prototype a visually complex animation that was dependent on a person’s brain activity. This project was written in openFrameworks and uses a Neurosky Mindset to link a player’s attention level to the intensity of electricity being generated from a sphere in the middle of the screen. The meat of the code is a recursive function that creates individual lightning strikes at a frequency inversely proportional to the attention parameter calculated by the Neurosky EEG headset. The project was visually inspired by the Tesla coil and those cool electricity lamps that were really popular in the 90s (see below).

    Once the connection between the Neurosky headset and the user’s computer is strong, the user can press the ‘b’ key (for brain) to link their EEG with the plasma ball. At any point the user can press the ‘g’ key (for graph) to see a HUD that displays a bar graph of their attention value on a scale from 0-100. The graph also shows the connectivity value of the device and the average attention value, calculated over the previous 5 seconds, being used to dictate the frequency of the electricity.
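
    Since the original is written in openFrameworks/C++, here is a loose Processing re-sketch of the recursive branching idea (the constants are made up, and a fake drifting signal stands in for the headset): one function recursively draws a jagged, forking bolt, and the attention value sets how often a bolt fires each frame, inversely, as described above.

    ```java
    // Loose Processing re-sketch of the recursive idea (the original project is written in
    // openFrameworks/C++, and these constants are made up): a recursive function draws one
    // jagged, branching bolt, and the attention value (0-100) sets how often bolts fire.

    float attention = 50;   // in the real project this comes from the NeuroSky headset

    void setup() {
      size(600, 600);
      strokeWeight(1.5);
    }

    void draw() {
      background(0);
      noStroke();
      fill(80, 80, 255);
      ellipse(width/2, height/2, 60, 60);                         // the "plasma ball" core
      attention = constrain(attention + random(-3, 3), 0, 100);   // fake a drifting attention value
      float strikeChance = map(attention, 0, 100, 80, 5);         // inverse mapping, per the post
      if (random(100) < strikeChance) {
        stroke(150, 180, 255);
        bolt(width/2, height/2, random(TWO_PI), 200, 6);
      }
    }

    // Recursively draw one lightning segment, then one or two shorter child segments.
    void bolt(float x, float y, float angle, float len, int depth) {
      if (depth == 0 || len < 4) return;
      float x2 = x + cos(angle) * len * random(0.2, 0.5);
      float y2 = y + sin(angle) * len * random(0.2, 0.5);
      line(x, y, x2, y2);
      bolt(x2, y2, angle + random(-0.6, 0.6), len * 0.7, depth - 1);
      if (random(1) < 0.3) bolt(x2, y2, angle + random(-1.2, 1.2), len * 0.5, depth - 1);  // occasional fork
    }
    ```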

    In order to get this application working on your computer, you must first download and install the Neurosky Thinkgear connector. You should be able to get it working with any Bluetooth-enabled Neurosky device; I’ve documented how to do so in the readme file on my GitHub. You can get my code for the project on my GitHub page here: https://github.com/crussoma/conorRussomanno_algo2012/tree/master/Conors_Final

    Also, if you just want to see the recursive electricity code working independent of a person’s EEG, download and install the app lightningBall (not lightnightBall_brain) from my github.

    Project Video

    To see this project in action check out my demo reel and jump to 35s.

    Visual Inspiration

    Screenshots

    References

    My code uses some of the logic and algorithms from Esteban Hufstedler’s Processing sketch: http://www.openprocessing.org/sketch/2924

    Additionally, a big shout out to Akira Hayasaka for writing the Neurosky openFrameworks addon that I used to pull this off:  https://github.com/Akira-Hayasaka/ofxThinkGear

  • ‘Wetlands’ Architectural Renders (12/20/2012)

    Project Summary

    I spent the past 6 weeks working with the amazing and progressive artist Mary Mattingly on her project titled Wetlands. Most of her work explores the complex relationship between people and the Earth. Wetlands, currently in the design phase, is a self-sustaining living environment that floats in the rivers outside of Philadelphia. The structure will be a low-cost floating barge with various components that explore DIY techniques of sustainability.

    My Role

    I worked with 2 other artists to create an architectural design for the structure that balanced its functional and design constraints. I helped with the concept drawings and took the lead on creating 3D renders of the design.

    Renders

    Project Presentation PDF: wetlands

  • Please Vote For An Awesome EEG Project! (10/26/2012)

    Please take 10 seconds to vote for my New Challenge application:

    Despite being rather silent on this blog recently, I’ve actually been quite busy. My ongoing thesis at Parsons MFA Design & Technology is an exploration of practical applications of wearable brain-computer interfaces. More on that to come.

    Recently, some fellow designers, engineers, researchers, and I applied for an award of up to $10K to explore whether wearable BCIs could be used to find complementary or alternative solutions for people suffering from attention disorders such as ADHD. If you support this cause, please click on the image above or the following link and click the “vote” button. You could comment here, but it would be better for you to comment on the application page itself in order to prove to the judges that people truly do care about this cause.

    The application is as follows:

    Project Title: Brain Design Lab – Finding Alternative Approaches to Addressing ADHD

    People Involved:

    • Conor Russomanno (Director) – Conor is currently a second-year student in Parsons School of Design’s MFA Design & Technology program. Conor did his undergraduate degree in engineering at Columbia University, and has been working with brain-computer interfaces for the past year. Check out his website at conorrussomanno.me.
    • Kristen Kersh – Candidate for the MFA in Design & Technology at Parsons School of Design; holds a Master’s in Neuroscience and Education from Harvard University
    • James Ramadan – Received dual majors in biology and statistics from the University of Virginia; currently does research in statistical analysis of quantitative EEG.
    • Amy Burns – Award-winning reporter who has spent more than 17 years in the multimedia industry, covering a diverse range of topics through the written word, social media, and the power of video.
    • Other members of the Brain Design Lab (our website is currently being built, braindesignlab.com)

    The Problem

    Our brains are dependent on the stimuli provided by our environment. Neuroplasticity is the notion that our neurons can be molded and re-purposed based on our experiences, even after critical stages of development. Currently, elite academic institutions such as Harvard, Columbia, and MIT are using functional magnetic resonance imaging (fMRI), magnetic resonance imaging (MRI), and electroencephalography (EEG) to research the brain’s ability to develop and change in response to stimuli. These studies have produced important findings with regard to a wide range of neurological diseases, traumatic brain injuries, and learning. In turn, these findings are being translated and applied to improved techniques in medicine, therapy, and education.

    One of the main shortcomings of interfacing the brain is the difficulty of attaining data outside the confines of a laboratory setting. There are very few studies done with a patient within the context of their normal environment, looking at how their home and what they eat, smell, see, hear, and touch affect the activity within their brain. Understandably, this is a very large challenge to address. If we are honored with receiving funds from the New Challenge competition, we intend to contribute to this pervasive challenge by addressing the issues of one of its sub-communities: people suffering from attention disorders that affect their ability to focus and learn.

    In 2007, the Centers for Disease Control and Prevention reported that 8.4% of American children aged 3-17 had at one point been diagnosed with ADHD. Roughly 50% of children with attention disorders continue to experience issues as they progress into adulthood, and almost 60% of people diagnosed with these disorders are prescribed medication in an attempt to address the symptoms. It is vital that researchers continue to explore alternative and complementary methods for addressing attention-related disorders, and do not rely entirely on prescription medication to resolve the issue. Additionally, we believe that solutions to these problems have the potential to extend beyond the scope of individuals diagnosed with ADHD, and could be implemented by undiagnosed individuals trying to enhance their level of focus, learning ability, and productivity. It is this ubiquitous issue that we intend to examine.

    Our Solution

    To address this problem, my team of designers, engineers, and researchers has come together to found the Brain Design Laboratory (BDL). The goal of this community is to design, build, test, and rebuild non-invasive neurofeedback platforms that allow users to record environmental conditions over prolonged periods of time, while simultaneously tracking brain activity. In order to explore alternative techniques to addressing ADHD, we want to analyze the data that is recorded by these systems.

    The systems will consist of a non-invasive headset that wirelessly sends brainwave data to a mobile phone and a central server, as well as a mobile application that tracks environmental stimuli both actively and passively. Passive stimuli will include variables such as location, noise, and movement, using GPS, audio inputs, and accelerometers. Actively recorded stimuli will include variables such as diet, activities, and moods, and will be input manually by the user. We believe this system will provide invaluable insight into how environmental stimuli correlate to variations in levels of attention. We will reach out to find user groups willing to test the platform. Eventually we hope to be able to provide real-time feedback to the user about how their environment is affecting their level of attention.

    Currently, commercial EEG devices are used primarily for stationary recording and interaction, and they do not serve well as systems for prolonged recording of brain activity. Some of the major shortcomings include discomfort and a lack of attention to aesthetics. We believe that our diverse team of designers and engineers with experience in neuroscience, electrical engineering, as well as fashionable technology, can provide a new outlook on these problems, creating a system that is both wearable and functional. Lastly, we don’t want to just build technology; we strive to turn BDL into an open community of designers, researchers, patients, parents, and other organizations who are dealing with this problem.

    Rough Budget

    • 20× NeuroSky ThinkGear chips ($35 each): the NeuroSky ThinkGear chips (http://neurosky.com/Business/ThinkGearChipsets.aspx) are commercial
    • Electronics ($2,000): Bluetooth modules, Android testing platforms, electrodes, wires
    • Materials ($500): garments, materials, and accessories for designing and building wearable devices, including fabric, sewing equipment, hats, etc.
    • Website ($500): we will use this money to establish our validity as an organization so that we can reach out to potential user groups for testing
    • Contingency cost ($1,000): miscellaneous expenditures

    Our Qualifications

    I first began trying to address this issue last spring when I designed and built a baseball cap with a sensor for recording brainwaves. To accompany the hat, I developed a mobile application for Android that received and recorded the user’s EEG, allowing for retroactive analysis of the data. The application also allowed the user to record a variety of moods and daily activities, the intention being to see how quantitative brain activity could be used to find new comparisons between the two. For more information about the project refer to: http://conorrussomanno.me/2012/06/19/interactive-android-application-for-eeg-biofeedback/

    This fall, with the support of the former dean of Parsons The New School for Design’s Art, Media, and Technology department, Sven Travis, I founded the Brain Design Lab (BDL), a community focused on finding practical applications for brain-computer interfaces. Since its inception the community has grown and now has members both inside and outside of the New School community. Some of BDL’s most prominent members include a recent graduate of Harvard’s Neuroscience and Education M.S. program, a University of Virginia graduate with a double major in biology and statistics, an award-winning journalist whose son suffers from an incredibly rare undiagnosed neurological disorder, and a graduate of Columbia University’s engineering program.

    Recently we received $1,500 from the New School Student Activities Finance Committee to host a development jam titled Hack-A-Brain. The goal of the event is to explore the potential of various front-line commercial EEG devices, while introducing New School students to the emerging industry of brain-computer interfacing (BCI). The Brain Design Lab has already connected with a number of individuals and organizations involved in the industry. Now we are looking to find additional support, make new connections, and apply novel design techniques to address problems related to the brain. We want to start by attempting to build user feedback applications for addressing attention disorders such as ADHD.

     

  • ABC No Rio – An Illustrated Short Story Prototype (10/20/2012)

    I collaborated with two other artists, Tharit Firm Tothong and Giselle Wynn, on the creation of this illustrated short story for a class project. The piece pays tribute to ABC No Rio, an art gallery and concert space in the Lower East Side that has been in operation since the early 80s and was very politically active during the late 80s and early 90s, acting as a sanctuary for society’s misfit demographics, as well as taking a strong stance of opposition to NYC’s heavy gentrification at the time. The piece imagines a fictional narrative from the point of view of a poor musician living in the slums of an overpopulated and depressed urban setting.

    The piece is comprised of unpolished illustrations, done by myself and Giselle, as well as a collection of photographs of authentic artwork from within the walls of ABC No Rio itself, taken by Firm. Firm also oversaw the design and layout of the composition.

  • Bull’s Eye – Hand-drawn Animation (9/15/2012)

    This hand-drawn animation is of an archer readying and firing his bow:

  • Futuristic Flyover (8/26/2012)

  • The Locket, Directed by Carillon Hepburn (8/19/2012)

    This amazing short film was written, directed, edited, and starred in by my inspiring little sister, Carillon Smith (aka Carillon Hepburn). She did it all in just 3 weeks, during a summer film intensive at Virginia Commonwealth University. The flick is a dynamic, mysterious, and gripping drama that touches on the themes of teen passion and self-discovery.

    http://www.youtube.com/watch?v=bH8suw2kXi4

  • Not So New News! (8/19/2012)

    Just found out that my friend Jeremy and I made it into the New School newspaper last May, after being asked what our plans were for the summer! It’s amusing to compare a prior perception of a future level of achievement to an ex post facto critique of the same success state. The “graphic novel” didn’t get written/illustrated, but I’d say that it’s underway. Additionally, it’s looking more and more likely that the Brain Cap will be the primary inspiration for my upcoming thesis. Though I didn’t do everything I wanted to this summer, I had many unexpected successes. I think that it’s important to have a plan, but just as important to be willing to deviate from it.

  • Rooftop Chilling w/ Shades (7/25/2012)

    “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” – Ferris Bueller

    And sometimes it looks better with sunglasses on. 🙂

  • Subway Paternity (7/24/2012)

    I took this photo on the way home from class one night as I was riding the 4/5 train south from Union Square. I couldn’t resist stealthily snapping this shot of father and son. After taking the picture, a woman on the other side of me smiled so widely I could see her from the corner of my eye. I turned to her and immediately knew this was the mother of the child. After fumbling over my words, I finally said, “I couldn’t help it; look at them!”

    “Thankfully, I get to every day,” she replied.

  • Lincoln Center (7/23/2012)

    On the NW corner of 65th and Broadway, near the Juilliard School.

  • On my way to becoming a cyborg! (7/3/2012)

    5 days ago I got surgery on my right foot to repair a soccer injury from last October. I voluntarily opted to undergo a new start-to-become-a-cyborg corrective procedure, and the x-ray photo below reveals my newly engineered foot.

    My New Cyborg Foot

    Below is an anatomy of my foot with a simple graphical depiction of what the surgeons did to my body. They fused my metatarsal 1 to my cuneiform 1, and screwed my cuneiform 1 to my cuneiform 2.

  • [SHARED] CBCNews Article: EEG Shows Awareness in Some Vegetative Patients (6/26/2012)

    A CBCNews article reveals: “Researchers [from the University of Western Ontario in London, Ontario] have discovered they can detect conscious awareness in some patients thought to be in a permanent vegetative state using an inexpensive EEG device that measures electrical activity in the brain.”

    Graphic from CBCNews

    Credit should be given to principal researcher Dr. Adrian Owen of the Centre for Brain and Mind at the University of Western Ontario; his collaborator, Damian Cruse; and everyone else who is contributing to the research.

    Dedicated to my Canadian viewers/collaborators. Keep up the good work! Also dedicated to my grandma, Carillon Leader, who is in the midst of a battle against Pick’s Disease. Hopefully this research will provide new insight into the possibilities of communicating with loved ones that suffer from dementia or other neurodegenerative complications.

    G-ma and my brother, Alex

  • The Ethics of Interfacing the Brain (6/22/2012)

    On March 7th, 2012 I attended a very enlightening guest lecture at Columbia University presented by Rajesh Rao, a computer science professor at the University of Washington. Professor Rao does extensive research into brain-computer interfacing, working across the computer science, electrical engineering, biology, and neuroscience that go into the development of the field. In his lecture he brought up many ethical considerations of interfacing the human brain that I will discuss in further detail below.

    When dealing with a technology as new and powerful as brain-computer interfacing, there are obvious and not-so-obvious ethical implications that should not be overlooked. While the capabilities of this type of application are currently limited due to the crude quality of the data, technology will continue to advance, allowing clearer data to be acquired more easily. Soon it will not be science fiction to have a portable EEG device that has bi-directional interactivity – in other words, a device that talks back. The implications of this type of technology lead to numerous potential risks that need to be planned for in order to be prevented. Some of these risks include health and safety, legal issues, abuse of technology, security and privacy, and social impacts.

    Health and Safety

    It is important to make sure that whatever technologies are implemented always make the safety of the user the number one priority. Certain invasive neurofeedback techniques, such as ECoG, involve surgical implantation of electrodes to provide extremely precise data. These types of procedures should remain within the medical realm and be saved for people who need them – not people who want them – until they are better understood. With that said, even non-invasive techniques such as EEG could pose potential risks of physical and emotional dependency and dehumanization. It will be important to make sure that these devices are used for the betterment of mankind.

    Legal Issues

    As far as legal issues are concerned, EEG devices are already being tested in criminal cases for lie detection and stimuli recognition.  This type of use will likely evolve into a prime source of debate in the coming years and should be approached with good moral judgment and meticulousness.

    Abuse, Security, and Privacy

    Another serious issue of concern is the likelihood that EEG devices will eventually be linked to the Internet and thus be susceptible to hacking. As brain-computer interfacing technology progresses, extreme security measures must be taken to ensure that such systems are safe from people with malicious intent. Additionally, as the number of people that use EEG devices continues to grow, the data that is collected from these users will become increasingly valuable. It will be important to ensure that the users’ anonymity is protected and that the industry that emerges around these technologies remains transparent.

    Social Impacts

    Another important risk to consider is the potential social stratification that could arise from introducing beneficial EEG devices to the everyday person. It will be important to mitigate the likelihood that EEG devices will become a luxury that only the rich can afford.

    Conclusion

    It is most important to understand that the items discussed above only begin to delve into the potential risks of brain-computer interfacing. Development in this field must be undertaken carefully and honestly. Brain-computer interfaces should be developed for the betterment of man, and therefore the essence of humanity must remain resolute as this field continues to grow. I strongly believe that an open-discussion approach to this ongoing journey is necessary in order for it to be done appropriately.

  • Reshape Outer Beauty – By Carillon Smith (6/21/2012)

    My little sister, Carillon, made this video about girls who struggle with self-image. It’s amazing, touching, and inspirational, and I’m not just saying that because she’s my sister.

    http://www.youtube.com/watch?v=7wQpnRgmSiQ

    She also has a YouTube channel about fashion and style, which teaches girls how to wear clothing, look pretty, and feel good about themselves in a healthy way.

  • [SHARED] Post-Planetary Design (Class Blog) (6/20/2012)

    I’m not sure why I haven’t shared the below link yet. It’s slipped my mind for long enough. The site is centered around one of my favorite Parsons D+T courses to date, Post-Planetary Design (yes, I stole the name for one of my blog’s categories). The course, taught by the great thinker Ed Keller, is a fantastic combination of Philosophy, Technology, Sci-Fi, Religion, Film, Literature, and much more. If you are interested in any of those topics, I urge you to pay it a visit. My personal contribution to the class was an exploration on the ethics of brain-computer interfacing.

    Much of the discussion and course content from Post-Planetary Design is inspiring one of my current ventures, a sci-fi graphic novel about the future of mankind that explores the question of what it means to be human.  More on that to come!

    Post-Planetary Design

  • Interactive Android Application for EEG Biofeedback (6/19/2012)

    //–The Code Is On Github!–//

    http://www.vimeo.com/41776885

    ABSTRACT

    This post details the research and development of a mobile application which receives and annotates neurofeedback data from a commercial electroencephalography (EEG) device with a single dry electrode. The system is designed to convert any Android mobile phone into a portable storage device that passively records the user’s brainwaves while providing an interface to manually annotate the data with daily activities and moods. This application has the potential to provide numerous benefits to medical fields including but not limited to: neuroscience, psychology, psychiatry, and head and neck trauma. While the medical implications seem to be the most immediately prevalent, an application like this could also provide the everyday person with a better understanding of how their daily routine affects their state of mind.

    Useful Links

    Electroencephalography (EEG), The Frontier Nerds, brain-computer interface (BCI), Arduino, NeuroSky, brainwave, Bluetooth, Google Android, Emotiv, MindFlex, Processing, Java, SD card, neurofeedback, active vs. passive electrode, wet vs. dry electrode, International 10-20 System, The OpenEEG Project, .csv file, baud rate, serial communication, integrated development environment (IDE)

    INTRODUCTION

    This project began as a personal infatuation with brain-computer interfacing after I discovered some fascinating interactive games and applications that people were developing using EEG technology. Neurofeedback, until the early 2000s, had been predominantly used in medical and academic settings, where expensive equipment was used to test subjects with specific neurological and psychological conditions. This paradigm began to change as open-source initiatives like The OpenEEG Project [1] and companies such as Emotiv [2] and NeuroSky [3] started developing commercial EEG hardware platforms that were available to the general public for testing and development. While these technologies are by no means cheap, they do provide a new medium for developers and artists to create projects that interface the neurological biofeedback of the human body.

    Problem Statement

    Today, neuroscientists, psychologists, and other physicians use neurofeedback to diagnose, predict, and treat certain neurological and pathological conditions. Some of these infirmities include epilepsy, seizures, mood disorders, and trauma. While research in this field has been conducted since the early 1900s, it hasn’t been until recently that large advancements have been made, due to the immense impact of technology on understanding the human brain [4]. In light of this, doctors who use EEG in their work are seeing an increasing benefit in working with developers and computer scientists to provide new methods of both retrieving and analyzing biofeedback from the brain. While there will always be a need for running EEG experiments in controlled environments, advancements in technology are enabling the creation of new applications that can provide more portable devices for retrieving, storing, and annotating neurofeedback. Such devices could prove to be invaluable for the advancement of medicine and achieving a more thorough understanding of the human brain as a whole.

    Additionally, the average person leads their daily life with very little understanding of what is actually going on inside of their own head. Most people have a very qualitative view of why they feel certain emotions or experience different moods. What most people don’t realize is that there are quantitative and measurable data that can be retrieved from the brain that can provide better insight into why we feel and act the way that we do. The biggest obstacle that is preventing the average person from knowing more about his or her own brain is the difficulty of providing a non-invasive yet informative neurofeedback system at an affordable cost.

    Objective

    The objective of this project is to provide a starting point for a customizable neurofeedback application that can be tailored to the needs of different doctors, researchers, and individuals. The application will help to provide additional insight into the understanding of the brain – both medically and in a general sense. Furthermore, it will serve as a reference for other developers that aspire to contribute to the field of interfacing the brain. It is important to keep in mind that this is an early iteration of an ongoing design process, and that development of this application will continue after further testing and collection of user feedback.

    PRECEDENTS

    Over the last decade, an increasing number of EEG-related projects have emerged outside of the medical world. They range from open-source collaboration initiatives to commercialized proprietary hardware intended for commercial development of applications. The projects listed below inspired the development of this application and provided invaluable knowledge for its execution.

    The OpenEEG Project

    EEG began to emerge outside of medical and research settings in 2004 when the non-profit organization known as Creative Commons, founded by Lawrence Lessig, launched an open-source EEG initiative called “The OpenEEG Project” [1]. It became the first online forum for open discussion about EEG technology and it has brought together many experienced professionals who are willing to share their knowledge about EEG hardware and software. The website’s homepage states that:

    “The OpenEEG project is about making plans and software for do-it-yourself EEG devices available for free (as in GPL). It is aimed towards amateurs who would like to experiment with EEG. However, if you are a pro in any of the fields of electronics, neurofeedback, software development etc., you are of course welcome to join the mailing-list and share your wisdom.”

    The website provides tutorials on how to build your own EEG devices, as well as examples of code that manages the intricate signal processing side of the technology. Additionally, anyone has the ability to join the OpenEEG mailing list to receive up-to-date information on advancements in EEG technologies.

    Frontier Nerds: How to Hack Toy EEGs

    In April 2010, a team from NYU’s ITP graduate program comprised of Eric Mika, Arturo Vidich, and Sofy Yuditskaya published a blog post titled “How to Hack Toy EEGs.” It thoroughly documents how they hacked an early EEG toy built by NeuroSky known as the Mind Flex. In the tutorial, they provide a list of commercial EEG devices available in 2010, a brief description of the science behind EEG, sample code, an Arduino library for retrieving and outputting the EEG data, a Processing sketch that visualizes the data channels sent out from the Mind Flex, a corresponding Processing library, and video documentation of the entire process [5]. The project is very well executed and documented, and it served as the primary inspiration for the application that I am currently developing, which uses the same Mind Flex toy to record the EEG data. Figure 1 shows a generalized diagram of how the Frontier Nerds hacked the EEG data out of the Mind Flex and used Arduino to synthesize the data. My application is an extension of this process and uses their Arduino library to retrieve the signals, which are then passed through a Bluetooth device to the Android phone.

    Necomimi: Brainwave Controlled Cat Ears

    In early 2012, a company called Neurowear designed a fashionable EEG accessory, known as the Necomimi, which uses NeuroSky’s internal signal processing chip. The accessory makes a pair of artificial cat ears react to the wearer’s brain state. As the wearer gets more excited the ears stand up. Conversely, when the wearer is more relaxed the ears sink down [6].

    The device demonstrates the usability of commercial EEG data for real-time interaction. Though this is a very simple demonstration of the data, it serves as a proof of concept for the realm of commercial EEG. Additionally, its immediate popularity is indicative of a shift in society’s opinion on the integration of BCIs into common culture.

    TARGET USER GROUPS

    The main goal for the current iteration of this application is to provide a more portable and scalable application for medical fields that currently use EEG. With that said, the application is customizable and can be tailored for independent research and annotation of neurofeedback data.

    Neurologists and Psychologists

    In the field of neurology, doctors are already examining correlations between EEG and numerous brain disorders, including addiction and other naturally occurring diseases of the brain. EEG is commonly used to study brain ailments such as multiple personality disorder [7], migraines [8], epilepsy [9], and seizures [10]. With regard to the effects of drugs on the human brain, research has also been done into EEG in the classification and evaluation of the pharmacodynamics of psychotropic drugs [11], cerebral blood flow velocity abnormalities in chronic cocaine users [12], as well as other forms of addiction. Additionally, EEG has been used to study many forms of brain trauma including neuronal injury after brain anoxia [13] and impact-related brain trauma [14]. While neurologists and psychologists continue to learn more about the brain, portable and easy-to-use neurofeedback applications like this project will provide new methods of attaining vital information for the advancement of neuroscience.

    The Average Person

    In addition to the immediate benefits that this technology will provide to doctors, there is also a future for personal EEG data tracking. In the same way that doctors use EEG to learn about neurological disorders and how they are triggered, the everyday person may soon be able to use portable neurofeedback devices to augment daily routine by quantitatively juxtaposing common daily activities against personal perception of state of mind.

    TECHNOLOGY

    This project involves the use of a variety of technologies. At the core of the system is the NeuroSky signal-processing chip. The data from this single dry electrode EEG toy is sent to an Arduino using methods described in the hack by the Frontier Nerds (mentioned above). The data is then encrypted and sent through a Bluetooth module that was purchased from Sparkfun [15]. The encrypted Bluetooth packets are received, parsed, and time-stamped by the application that I developed on an HTC Nexus One Google Android mobile phone. The packets are then methodically stored to a .csv file on the internal SD card of the phone. Additional manually selected inputs are also time-stamped and written to the .csv file on the phone’s internal SD card. The .csv file can then be opened in Microsoft Excel to be analyzed and visualized with charts and graphs. The remainder of this section goes into further detail about the technologies involved in this project.
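
    As a concrete (if simplified) picture of the parse-timestamp-store step, the sketch below takes one comma-separated packet of MindFlex channel values, prepends an arrival timestamp, and appends the row to a .csv file. The column names and packet layout are stand-ins rather than the app’s actual protocol, and where the real application writes to the phone’s SD card from Processing’s Android mode, this desktop sketch just writes to the sketch folder with createWriter().

    ```java
    // Simplified sketch of the logging step only (the packet layout and column names are
    // stand-ins, not the app's actual protocol): take one comma-separated packet of MindFlex
    // channel values, prepend an arrival timestamp, and append the row to a .csv file.
    // The real application writes to the Android phone's SD card; this desktop sketch
    // writes to the sketch folder using Processing's createWriter() instead.

    PrintWriter logFile;

    void setup() {
      logFile = createWriter("eeg_log.csv");
      logFile.println("timestamp_ms,connectivity,attention,meditation,theta,delta,"
                    + "lowAlpha,highAlpha,lowBeta,highBeta,lowGamma,highGamma");

      // A fake incoming packet; in the running app this arrives over Bluetooth.
      logPacket("0,57,61,12000,9000,4100,3800,2600,2200,900,700");

      logFile.flush();
      logFile.close();
    }

    // Prepend the arrival time and append the row. Manually selected annotations
    // (moods, activities) would be written the same way with their own timestamps.
    void logPacket(String packet) {
      logFile.println(System.currentTimeMillis() + "," + packet);
    }
    ```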

    Figure 1. Flow of EEG data from the Mind Flex to a Bluetooth communication device [5].

    Electroencephalography (EEG)

    EEG is the recording of changes in electric potential outside the skull, caused by fluctuations in brain activity. The electrical activity originates from current flowing between the neurons of the brain. EEG is commonly used to research evoked potentials (EPs) and event-related potentials (ERPs). The former requires averaging the data and connecting it to external stimuli, and the latter refers to averaged EEG responses that correlate to more complex processing of stimuli. EEG signals are separated into different bands that are unique in their frequency. These different frequencies have unique distributions over the scalp and also have individual neural significance that relates to different brain activity. While there are more accurate methods of retrieving neurofeedback data than EEG, they are more invasive and require more equipment. These techniques include electrocorticography (ECoG) and magnetic resonance imaging (MRI) [16].

    EEG Bands

    Standard EEG band frequencies fall in the range of 0 to just over 100 Hertz. These bands differ between adults and children, and each is dominant in different regions of the brain. The chart below reveals some of the common EEG bands and their respective frequencies, brain states, and wave types.

    Figure 2. Common EEG Band Chart [16]

    Electrode Types

    Electrodes used in EEG devices have two types of classification that affect the quality of data received by the electrode. The first classification is whether the electrode is “dry” or “wet”. Dry electrodes consist of a dry conductive surface that is placed against the scalp. Wet electrodes are coated with a highly conductive solution, often saline, which significantly increases the clarity of the data [17]. The other classification is whether the electrode is “active” or “passive”. Active electrodes have inbuilt circuitry that amplifies the electrical current close to where the signal is picked up from the scalp. Due to the extremely small electric potentials (millionths of a volt) that are recorded during EEG, data can be greatly distorted by even the resistance of common conductive wire. Therefore, active electrodes that are able to amplify the signal early in the system produce a much better resolution. In both of these cases, however, the option that provides the stronger signal is also more cumbersome [1].

    International 10-20 System

    When dealing with multiple electrodes, EEG placement has been standardized to allow for better collaboration between researchers. The standardized system is referred to as the International 10-20 System. The diagram in Figure 3 depicts a top-down perspective of the electrode placement of an apparatus that uses the 10-20 system [18]. Note that this project does not use this system due to the fact that it utilizes a single electrode and in turn does not provide spatial resolution of the EEG signal.

    Figure 3. International 10-20 System of EEG Electrode Placement [18]

    Hardware Used

    I used a variety of hardware in this project that enabled collection and transfer of EEG data.

    NeuroSky MindFlex

    The NeuroSky MindFlex is a proprietary commercial EEG device that uses a single electrode that is both dry and passive to parse a raw neurofeedback signal into 11 channels. The 11 channels are: connectivity – a reading between 0 and 200 (0 being perfect connectivity), attention (a black box value calculated by NeuroSky’s proprietary signal-processing software), meditation (similar to attention), theta, delta, low alpha, high alpha, low beta, high beta, low gamma, and high gamma [3].

    Arduino

    “Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It’s intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments [19].”

    Sparkfun Bluetooth Mate Silver

    This Bluetooth module is designed to be used with the Arduino environment. The module’s modem works as a serial (RX/TX) communication link. It can be programmed to stream serial data at baud rates between 2,400 and 115,200 bps. The device is described thoroughly on Sparkfun’s website, which contains schematics, data sheets, and tutorials for the product [15].

    HTC Nexus One Google Android Phone

    The HTC Nexus One is a mobile phone that runs on the Google Android operating system 2.3.6. It was released in 2010, is Bluetooth compatible, has a resolution of 480×800 px with 254 pixels per inch, and contains an internal SD card. Though it is not the most modern Android phone, it was still able to run this version of the EEG application – a good sign for future iterations.

    Software Used

    A wide variety of software was used to develop this application. I worked primarily with Arduino and Processing. Arduino’s IDE is based on C/C++ and is supported by an extensive website with learning material, examples, and forums [19]. Processing is a Java-based programming environment built around a large collection of primarily graphics-oriented libraries. It too is supported by an extensive website that allows even novice programmers to start developing right away. Processing is very useful when working with Android development because there is an Android mode built into the Processing IDE that allows developers to test applications on an Android emulator that can mimic various types of Android phones. Additionally, developers are able to access the Android SDK and APIs directly from the Processing environment. Certain aspects of the application required writing raw Java code that had not yet been directly translated into Processing libraries. These elements of the code included working with Bluetooth, writing to the SD card, and running the application as a background process so as not to interrupt the data stream [20].

    EARLY PROTOTYPE & PLANNING

    This project began in December of 2011 after I first familiarized myself with EEG technology and the field of brain-computer interfacing. It started as a predominantly hardware-based project while I was still getting acquainted with the Arduino environment and replicating the hack detailed by the Frontier Nerds of NYU’s ITP department. In addition to the initial hardware, I created an interface concept design for what would eventually evolve into the current application that this paper details. Before beginning work on the current application, I also conducted a generic survey via Facebook to gather input on what activities and moods the “everyday person” would be interested in comparing to their own quantitative EEG data.

    BrainCap v1.0

    My own work with BCIs began with a project that I dubbed the BrainCap, which eventually became known as BrainCap v1.0 due to later iterations. The device used the Arduino library created by the Frontier Nerds to extract the serial EEG data from the NeuroSky MindFlex. With some simple Arduino code, the system wrote the incoming data to an SD card using an SD Breakout module purchased from Sparkfun. A 9-volt battery was used to power the device, which allowed the user to walk around without any cords attached. Additionally, the device had built-in buttons and beepers to ensure that the system would start and stop appropriately without deleting the collected data. All of the components of the device were mounted onto a baseball cap that was purchased at a dollar store.
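
    The core of that logging loop boils down to something like the sketch below. This is a simplified reconstruction for illustration, not the original firmware: it assumes the Frontier Nerds’ Brain library for parsing the MindFlex packets and the standard SD library, and the chip-select pin and file name are placeholders.

    [sourcecode]
    // Simplified BrainCap v1.0-style logger (illustrative reconstruction).
    #include <SD.h>
    #include <Brain.h>

    const int SD_CS = 10;        // assumed chip-select pin of the microSD breakout
    Brain brain(Serial);         // MindFlex TX wired into the hardware serial RX

    void setup() {
      Serial.begin(9600);        // the MindFlex streams at 9600 baud
      SD.begin(SD_CS);
    }

    void loop() {
      // brain.update() returns true roughly once per second, when a complete
      // packet has been parsed; readCSV() returns all channels as one CSV line.
      if (brain.update()) {
        File logFile = SD.open("EEGLOG.TXT", FILE_WRITE);
        if (logFile) {
          logFile.print(millis());   // millisecond time stamp since power-up
          logFile.print(",");
          logFile.println(brain.readCSV());
          logFile.close();           // close after each write so data survives power loss
        }
      }
    }
    [/sourcecode]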

    The device was successful in that it collected and time-stamped data, but there was no built-in mechanism for adding context to the data with external stimuli. I tried to do this by taking extensive notes during recording sessions, but it proved to be tedious. BrainCap v1.0 can be seen in Figure 4, while some of the data that was collected can be seen in Figure 5. A more thorough explanation of the project and its results can be viewed at the following URL:

    http://www.digitaldistillation.com/pComp/?p=897

    Figure 4. BrainCap v1.0

    Figure 5. External Stimuli Recorded by Hand to Add Context to the Recorded EEG Data

    Application Interface Concept Design

    As the need for a more convenient method for recording external stimuli became apparent, I began to develop concepts for a mobile application interface to accompany the BrainCap. The basic interaction of this interface design was to provide a simple yet comprehensive system for manually inputting both preset and custom activities and moods to contextualize the neurofeedback. Some sketches of the early interface design, which ended up being very similar to the current interface, can be seen in Figure 6.

    Figure 6. Early Interface Design

    To ensure that the interface received some feedback before I began developing it, a simple survey was distributed via Facebook to get input about common daily activities and moods that people would be interested in learning more about with regards to their own EEG data. The survey questions can be seen in Figure 7.

    Figure 7. Facebook Survey for Feedback on Interface Design Features

    CURRENT SYSTEM

    The current iteration of the application achieves the goals that were set in the early stages of the project. The system successfully records both passively collected EEG data from the MindFlex and manually input activities and moods from the mobile application to the internal SD card of the Nexus One Android mobile phone. The BrainCap was modified to include the bare minimum number of components in order to achieve a longer battery life and also to ensure the clean processing of EEG data with the Arduino.

    Portable EEG Device (BrainCap v2.0)

    BrainCap v2.0 consists of the following components: the NeuroSky MindFlex, an Arduino Uno, a long-lasting Ultralife 9v battery, a breadboard, the Sparkfun Bluetooth Mate Silver, electrical wires, and the same baseball cap used in v1.0. The only software written for this part of the project was compiled in Arduino and uses the library created by the Frontier Nerds. BrainCap v2.0 can be seen in Figure 8.

    Figure 8. BrainCap v2.0
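
    The Arduino side of v2.0 essentially parses each MindFlex packet and pushes it straight out over the Bluetooth Mate so the Android application can time-stamp and store it. A rough sketch of that pipeline is shown below; it is illustrative rather than the exact firmware, and the pin choices and baud rates are assumptions.

    [sourcecode]
    // BrainCap v2.0-style streaming sketch (illustrative reconstruction).
    #include <SoftwareSerial.h>
    #include <Brain.h>

    SoftwareSerial bluetooth(2, 3);   // assumed RX, TX pins wired to the Bluetooth Mate
    Brain brain(Serial);              // MindFlex data arrives on the hardware serial port

    void setup() {
      Serial.begin(9600);             // MindFlex baud rate
      bluetooth.begin(9600);          // assumed link speed to the paired Android phone
    }

    void loop() {
      // Forward one CSV packet (about once per second) to the phone, which
      // handles time-stamping and writing to its internal SD card.
      if (brain.update()) {
        bluetooth.println(brain.readCSV());
      }
    }
    [/sourcecode]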

    Mobile Application Interface

    The interface of the Android application contains four main tabs: Annotate, Your Brain, Share, and Settings. Currently, the only tab with any functionality is the Annotate tab. Within this tab the user is prompted to select from two types of manual inputs – activity or mood. Once either of these is selected, the user is taken to a new window where he or she is able to select from an assortment of preset common activities or moods. In addition, the user is able to create custom activities and moods to personalize his or her EEG annotation.

    Once an activity or mood is selected, the user can decide whether the input should be turned on or off, logged as an instant event, or retroactively added to record past events. If an activity or mood is turned on, it is highlighted with a green overlay to indicate to the user that it has been activated. After a selection has been made, the application writes the appropriate subcategory to the internal SD card of the Android phone and uses the internal clock of the phone to time-stamp the entry so it can be synchronized with the passively recorded EEG data. Additionally, the interface has a narrow text field at the top of the screen that shows the incoming EEG data packets, which arrive once per second. A screenshot of the interface can be seen in Figure 9 below.

    Figure 9. Screenshot of the Application’s Interface

    Data Analysis Techniques

    Once data has been recorded to the internal SD card of the phone, the .csv file that contains the data can easily be sent to a computer via a standard USB cable. That file can then be opened in Microsoft Excel to construct charts and graphs for better interpretation of the data. Figure 10 shows a graph of sample data that was recorded using the application detailed in this project. The graph depicts my brain’s attention and meditation activity as I played a FIFA World Cup final in the Xbox game FIFA 2010 with one of my friends. The data points are averaged over 30-second intervals to provide a more coherent visualization of the EEG. Averaging the data over larger intervals is necessary due to the poor quality of the EEG data that is recorded from a single electrode that is both dry and passive.

    Figure 10. A Graph That Depicts My Brainwaves as I Played a Video Game with My Friend
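
    For anyone who would rather not do the averaging in Excel, the same 30-second windowing can be reproduced with a short stand-alone program. The snippet below is only an illustration of the idea and is not part of the project; it assumes each CSV row starts with a millisecond time stamp followed by the attention value, which is a simplification of the real column layout.

    [sourcecode]
    // Average the second column of a CSV log over 30-second windows
    // (illustrative only; file name and column layout are assumptions).
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
      std::ifstream in("eeg_log.csv");
      const long WINDOW_MS = 30000;     // 30-second averaging window
      long windowStart = -1;
      double sum = 0;
      int count = 0;
      std::string line;

      while (std::getline(in, line)) {
        std::stringstream row(line);
        std::string cell;
        std::getline(row, cell, ',');
        long t = std::stol(cell);        // time stamp in milliseconds
        std::getline(row, cell, ',');
        double attention = std::stod(cell);

        if (windowStart < 0) windowStart = t;
        if (t - windowStart >= WINDOW_MS && count > 0) {
          std::cout << windowStart << "," << sum / count << "\n";  // one averaged point per window
          windowStart = t;
          sum = 0;
          count = 0;
        }
        sum += attention;
        ++count;
      }
      if (count > 0) std::cout << windowStart << "," << sum / count << "\n";
      return 0;
    }
    [/sourcecode]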

    CONCLUSION

    The many phases of this project have encompassed a wide range of technologies that have come together to form a unique application with some potentially significant implications for various fields of research. Though the project is still in an early stage, the groundwork has been laid and the application can easily be adapted from this point forward. The testing that has been done so far has provided unique insight into what is possible with a system that uses this type of EEG device – a device that provides crude real-time data that can be averaged over longer intervals to identify clear trends in brain activity. However, the lack of data analysis is one of the major shortcomings of this project. Because I have the only working instance of the system, I am unable to collect large amounts of data from different test subjects.

    Findings

    At this point no analysis has been done into finding correlations between external stimuli, personal perception of state of mind, and EEG data. This type of research will require an initial user group to test the application. Despite this, the few tests that have been run show clear linear and oscillating trends in brainwaves when averaged over extended periods of time. It was discovered early on that the data is too chaotic when looking at the data points that are received every second. However, when this data is averaged, distinct patterns in brainwave activity can be identified.

    Future Directions

    Moving forward there are many steps that need to be taken.

    Initial User Group

    An initial user group that is willing to test this application needs to be selected in order to get user feedback on the interaction of the application. Additionally, this user group will provide the amount of data that is necessary to start juxtaposing EEG data against daily activities and moods. From there, data analysis can commence to see what types of correlations data of this quality can produce between EEG and other information.

    Interface Additions

    As far as the interface is concerned, a data visualization system will be implemented so that the user can track data in real time. This visualization will allow the user to see automated graphs of his or her brain activity without having to upload the .csv file into Microsoft Excel. The data visualization interface will resemble the Processing sketch (Figure 11) that the Frontier Nerds made to accompany their Arduino library [5].

    Figure 11. EEG Data Visualization Developed by The Frontier Nerds [5]

    Sharing Capability

    The next iteration of this project will also include an option for the user to anonymously share his or her EEG data and inputs with a central database. This database will serve as an aggregation of human brain activity and will provide an invaluable set of data to analyze and use as a baseline to compare to the EEG data of any individual.

    Ethical Considerations

    When dealing with a technology as new and powerful as brain-computer interfacing, there are obvious and not-so-obvious ethical implications that must be taken into consideration. While the capabilities of this type of application are limited due to the crude quality of the data, technology will continue to advance allowing for clearer data to be acquired more easily. Soon it will not be science fiction to have a portable EEG device that has bi-directional interactivity – in other words, a device that talks back. The implications of this type of technology lead to numerous potential risks that need to be planned for in order to prevent their negative consequences. Some of these risks include health and safety, legal issues, abuse of technology, security and privacy, and social impacts.

    ACKNOWLEDGEMENTS

    I’d like to give a special thanks to my Parsons Design + Technology instructors from Spring 2012, Jonah Brucker-Cohen, Katherine Moriwaki, Joel Murphy, and Ed Keller, for providing great mentorship throughout the development of this project. I’d also like to thank NeuroSky, Sparkfun, Arduino, Google, and Processing for giving me fun toys to play with. Thank you, my good friend, Joe Artuso, for editing the piece. Lastly, thank you to The Frontier Nerds for giving me a place to start.

    REFERENCES

    I tried my best to give credit to all of the work that I referenced in the development of this project. Please contact me if you’d like me to add/remove anything to/from this blog post. I will not hesitate to do so.

    1. The OpenEEG Project. http://openeeg.sourceforge.net/doc/
    2. Emotiv. http://www.emotiv.com/
    3. NeuroSky. http://neurosky.com/
    4. Electroencephalography. http://www.bem.fi/book/13/13.htm#03
    5. Mika, E., Vidich, A., Yuditskaya, S. How to Hack Toy EEGs. Frontier Nerds: An ITP Blog. http://frontiernerds.com/brain-hack
    6. Necomimi. http://neurowear.com/
    7. Arikan, K., et al. EEG Correlates of Startle Reflex with Reactivity to Eye Opening in Psychiatric Disorders: Preliminary Results. Clinical EEG and Neuroscience 37.3 (2006), 230-234.
    8. Bjørk, M.H., et al. Interictal Quantitative EEG in Migraine: A Blinded Controlled Study. The Journal of Headache and Pain 10.5 (2009), 331-339.
    9. Kennett, R. Modern Electroencephalography. Journal of Neurology 259 (4), 783-789. April 2012.
    10. Bubrick, E.J., Bromfield, E.B., Dworetzky, B.A. Utilization of Below-the-Hairline EEG in Detecting Subclinical Seizures. Clinical EEG and Neuroscience 41.1 (2010), 15-18.
    11. Saletu, Bernd, Anderer, P., Saletu-Zyhlarz, G. EEG Topography and Tomography (LORETA) in the Classification and Evaluation of the Pharmacodynamics of Psychotropic Drugs. Clinical EEG and Neuroscience 37.2 (2006), 66-80.
    12. Copersino, M.L., et al. EEG and Cerebral Blood Flow Velocity Abnormalities in Chronic Cocaine Users. Clinical EEG and Neuroscience 40.1 (2009), 39-42.
    13. Rossetti, A.O. Early EEG Correlates of Neuronal Injury After Brain Anoxia. Neurology 78 (11): 796-802. March, 2012.
    14. Tezer, I., Dericioglu, N., Saygi, S. Generalized Spike-Wave Discharges with Focal Onset in a Patient with Head Trauma and Diffuse Cerebral Lesions: A Case Report with EEG and Cranial MRI Findings. Clinical EEG and Neuroscience 35.3 (2004): 151-157.
    15. Sparkfun. http://www.sparkfun.com/
    16. Budzynski, H., Budzynski, T., Evans, J. Introduction to Quantitative EEG and Neurofeedback: Advanced Theory and Applications. 2nd Edition. Elsevier Inc. 2009.
    17. Wang, Wang, Maier, Jung, Cauwenberghs. Dry and Noncontact EEG Sensors for Mobile Brain-Computer Interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering 20 (2): 228-235. March, 2012.
    18. Gilmore, R.L. American Electroencephalographic Society Guidelines in Electroencephalography, Evoked Potentials, and Polysomnography. J. Clin. Neurophysiol (11), 147. January, 1994.
    19. Arduino. http://arduino.cc/
    20. Processing for Android. http://wiki.processing.org/w/Android
    21. Rao, R.P.N. University of Washington. Computer Science. March 7, 2012. http://www.cs.washington.edu/homes/rao/

    WANNA GET INVOLVED?!

    I’d be happy to share the graphics, code, and system schematics with anyone who wants to help with application development and/or collection of data. If enough interest is generated, I’d also be willing to post a thorough step-by-step procedure for application/device development with all the necessary assets for the Android app. If you are even remotely interested, don’t be afraid to contact me at conor.russomanno@gmail.com AND comment on the blog (interest is contagious!). The step-by-step will eventually get posted regardless, but support from you guys would definitely help get the ball rolling. Brainiacs, assemble!


  • Social EEG Study – Request For Help! (4/30/2012)

    Have you ever wondered why you feel the way that you do? Have you ever been interested in seeing exactly how your daily routine affects your moods and emotions? Are you interested in discovering how to “perfect” your routine?  If so, keep on reading.

    My good friend, Wojo, and I are undertaking a cutting-edge research study to use personalized EEG – a non-invasive method of neurofeedback – to quantitatively compare common daily activities with various moods and emotions. Before we get started, we’re interested in getting some feedback from you!

    As we develop the first iteration of this application, we will be providing the user with 2 lists of manual inputs.  The first list will be a group of “common daily activities” while the second list will be a collection of “common moods and emotions”.  Using passively collected EEG data from the user, we hope to discover discrete correlations between these two categories, while also providing quantitative classifications for various moods and emotions – concepts that have traditionally been understood qualitatively.

    If you want to HELP US OUT?!…

    PLEASE SEND (Comment or email conor.russomanno@gmail.com):

    1)     a list of daily activities whose effects you are interested in knowing more about.  Examples include:  exercise, nap, eat breakfast/lunch/dinner, work, read, smoke, drink a coffee, play video games, drink alcohol, etc.

    2)     a list of moods and emotions that you are interested in understanding more quantitatively.  Examples include:  alert, tired, happy, excited, anxious, sad, depressed, focused, motivated, un-motivated, slow, etc.

  • World Trade Center 1 (Psychedelic Sunset) (4/18/2012)

  • NYC Skyline (4/18/2012)

    This is a panoramic shot that I made in Photoshop by stitching together 3 stationary shots (from different angles).  I used a lot of blur, smudge, stamp, and the film grain filter.  If you look carefully you can see a jagged seam down the middle of the photo where the teal of the “left” sky does not perfectly fade into the more royal blue of the “right” sky.  I might try messing around with the picture a little bit more to see if I can get rid of that.  The clouds in the middle were the trickiest part, however.  This was due to the fact that the central sky was at a different angle relative to the buildings in the various photos because of the natural perspective of the camera lens.  If I had overlaid more images at the same angle this problem would have been more easily mitigated, but I didn’t take enough photos at the time.  Using Google Maps and the landmarks in the photograph, I estimate that the shot captures about 115 degrees of NYC’s skyline.  Sweet!

    I think it’s a pretty rad shot because you can see World Trade Center 1 under construction (in the “left” sky w/ the two cranes that look like bug antennas) as well as the Empire State Building in the background.

  • nyc_livin’ (4/14/2012)

    [slideshow]

  • [SHARED] iOS interfacing – Frequency Shift Keying (FSK) from Arduino (4/12/2012)

    So, it turns out that I’m probably not going to use the Hijack approach to getting serial data from my Mindflex into my iPhone.  This is mainly due to the fact that I have the majority of the components to build a similar system from scratch.  The difference will be that I will not use the iPhone’s battery to power the system; I will use an external power source.  I intend to follow the steps detailed in this forum discussion that I found on Arduino’s website: Arduino–>FSK—>Iphone (OpenframeWorks)

    Here is a snapshot and the corresponding schematic of the setup of the parts:

    Setup: Arduino > iPhone Audio Jack

    Schematic

    NOTE:  If you don’t mind jailbreaking your iPhone, the setup documented here interfaces the iPhone through the 30-pin dock at the bottom of the phone.  It allows for a faster baud rate and easier access to the iPhone’s power source. Apparently you have to buy the Redpark or Legacy (?) device for ~$60, however.
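
    To make the FSK idea more concrete, here is a rough conceptual sketch of how an Arduino could shift between two tones to encode bits on the audio line. This is not the code from the forum thread above and has not been tested against an openFrameworks decoder; the 1200/2200 Hz mark/space pair and the 10 ms bit period are assumptions chosen for clarity.

    [sourcecode]
    // Conceptual FSK transmitter sketch (illustrative only).
    const int AUDIO_PIN = 9;   // assumed pin feeding the phone's audio jack (through a voltage divider)
    const int BIT_MS = 10;     // assumed duration of one bit

    void sendBit(bool bit) {
      // Represent each bit as one of two tones - the essence of frequency shift keying.
      tone(AUDIO_PIN, bit ? 1200 : 2200);
      delay(BIT_MS);
    }

    void sendByte(byte b) {
      sendBit(0);                      // start bit
      for (int i = 0; i < 8; i++) {
        sendBit((b >> i) & 1);         // least significant bit first
      }
      sendBit(1);                      // stop bit
      noTone(AUDIO_PIN);
    }

    void setup() {
    }

    void loop() {
      sendByte('A');                   // stream a test byte once per second
      delay(1000);
    }
    [/sourcecode]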

  • I love this city. (4/11/2012)

  • Robot Man (4/5/2012)
  • Our Future World (4/5/2012)

     

  • BrainSYNC – Neurofeedback vs. Social Media (3/19/2012)

    Check out the post that I made below to read about my ideas of syncing personal EEG feedback with the masses via a smartphone application.

    http://interaction2012.coin-operated.com/?p=547

  • Project Notes (EEG to Mobile App) (2/27/2012)

    Notes on my ongoing EEG project (with semester plan):

  • Preliminary EEG Research (and Brain Cap v1.0) (2/22/2012)

    Introduction

    This post details my first attempt at building a Brain Computer Interface (BCI), as well as some of the research I did along the way.  There are a number of methods of retrieving neurofeedback from the brain, but Electroencephalography (EEG) is the least invasive of the known methods.  Other methods, despite producing cleaner and more localized data, require probing into the brain, which is a dangerous and expensive process requiring the assistance of a highly trained professional.  For this reason, EEG is at the forefront of commercial BCIs.  Some avant-garde applications of commercial EEG products include revolutionized gaming interaction, health and physical-state monitors, and abstract augmentation of art and music.

    The Big Players

    Currently there are a few companies who are pioneering the field of commercial EEG technology.  The two biggest players that I have come across are NeuroSky and Emotiv.  These two companies, though obvious competitors, use different styles of retrieving EEG feedback.  NeuroSky’s primary and most recent commercial headset, the MindWave, focuses on neurofeedback via a single dry electrode that makes contact with the middle of its user’s forehead.  In contrast, Emotiv’s main product, the EPOC (also a headset), uses 14 saline (wet) electrodes and provides a spatial resolution of the EEG data.  It does this by following the International 10-20 System, which is the medical standard for EEG electrode placement when working with multiple electrodes.  While NeuroSky offers only a single electrode for raw data extraction, it comes at a much more reasonable price of ~$100 compared to Emotiv’s Developer Package, which costs ~$500.  Regardless, if you intend to leverage either one of these products for personal business development, it’s going to cost a lot more than that.

    In addition to these two companies, there is an initiative known as The OpenEEG Project.  This website is a collection of open-source knowledge provided by people who have done extensive work with EEG technology.  The website details different methods of hardware and software design and also provides links to external related websites.  It is a great resource for anybody who doesn’t want to spend an exorbitant amount of money on commercial EEG products but wants to figure out EEG from the ground up.

    What I Did

    To start, I got my hands on the Mattel Mindflex, an early NeuroSky-licensed product.  The Mindflex uses neurofeedback from the NeuroSky chip to control a fan that adjusts the height of a ball up and down, simulating telekinesis and turning it into a game.  After acquiring one of these cool little devices I found the following tutorial (How to Hack Toy EEGs), done by some guys at NYU’s ITP program, which shows exactly how to hack apart the Mindflex.  The post is very well organized and has links to a fascinating data visualization done with Processing.  The guys were even nice enough to include the necessary libraries to run it yourself.

    After I got everything that the guys from ITP had done working, I decided to take it a step further by designing a similar apparatus, but one that could be worn without having to be connected to the computer via USB.  My intention was to design something that I could wear over the course of an extended period of time that would passively record my brain data while I was thinking about other things.

    What resulted was a baseball cap rigged with the Mindflex/NeuroSky EEG device, an Arduino that routed the data onto a microSD breakout (the same memory device used by most digital cameras) in the form of a .TXT file, and some other buttons and electronic parts to control the start and stop of the system.  With the device, I was successfully able to retrieve hours of my own EEG data and analyze it in Microsoft Excel after the fact.  Because the data from the NeuroSky is sent in packets at a rate of 1 ASCII string per second, I was able to time-stamp the data relatively easily and then graph my brain function over time.

    Check out the link below for a more thorough description of the design, testing, and analysis of my device.  Be warned that it was written at ~6am after pulling an all-nighter, so I apologize in advance for the parts that don’t make much sense:  My First BCI (EEG Brain Cap).

    My Very Own EEG

  • A Response to Emerging Technologies (2/20/2012)

    This piece is a response to the following articles:

    The evolution of new systems that undermine the physical boundaries of our current world is the theme that stands out when comparing and analyzing these three articles.  Each of these articles identifies emerging synthetic life forms that are redefining the world’s power structures. These life forms are systems that humans – as individuals – can create, but once created have little to no control over.

    In the article Why We Twitter: Understanding Microblogging Usage and Communities by Akshay Java, Tim Finin, Xiaodan Song, and Belle Tseng, the authors identify a new system of communication referred to as micro-blogging. This system has evolved as a result of rapidly advancing Internet technologies in addition to humanity’s slow evolution into and acceptance of a joint physical-virtual existence. In the last 20 years, the emergence of virtual social media platforms has redefined people’s understanding of community. Platforms such as Twitter, Facebook, and Xbox Live have remapped our community of friends and family from a demographic of people within our close physical proximity to a community of individuals that could be anywhere in the world but who share a virtual demographic similar to our own. Amidst this evolution there has been a drastic transformation in how information is disseminated from “the source” to “the recipient.” The world in which linear channels of information transfer dominated the media no longer exists. Instead, information flow resembles more of a spider web pattern where anyone and everyone can and does contribute.  The result of this shift is a world where control of information is virtually impossible.

    In the Wired article Great Wall of Facebook: The Social Network’s Plan to Dominate the Internet – and Keep Google Out, the author, Fred Vogelstein, brings up some very daunting facts about how much control Facebook and Google have over our personal information. Even more foreboding is how much our personal information is worth. After reading this article two things stood out to me.

    First, I found it incredible how one individual, in this case Mark Zuckerberg, could create a system that had the ability to infect the entire planet in a matter of a few years. At this point, it would be impossible to stop the Facebook pandemic.  Even if Facebook were to destroy all of its servers today, Google would immediately fill the void with Google+ – and if not Google+, then somebody else.  The demand for personal information of the masses is just too high.

    The second idea that stood out to me – and this is a personal opinion, not a fact – is the notion that governments no longer monitor and control their citizens.  Instead, it’s the corporations housing our personal information and virtual identities that truly govern us.  In the same way that communication is evolving so is the concept of governance and patriotism. Governments are losing their authority to corporations as a result of new forms of taxation. Corporations are able to tax our virtual identities, properties, and businesses whereas governments can only tax our physical ones. Just a little fact: Apple surpassed the United States government in liquid cash flow in late 2011 (http://www.denverpost.com/business/ci_18590939).  I would be interested to see a study on the distribution of people who would rather be seen wearing an American flag on their T-Shirt compared to those who would rather have a glowing white apple on their laptop.

    I found the third article, Why The Future Doesn’t Need Us, by Bill Joy, to be the most thought provoking of the bunch. Joy raises the issue of the looming threat of human intelligence and the “desire to know” leading to our eventual demise – his biggest concern being emerging genetic, nano-biotechnological, and robotic (GNR) advancements. After reading this article I was disappointed in myself for not having read it in the first 12 years of its existence. I recommend it as a must-read for anybody interested in technology and innovation who also has an appreciation for ethics and the further existence of the human race. What fascinated me most about this article was how critically Joy examined the existence and importance of the systems of GNR advancements and how keen he was to the volatility of their potential. The viral nature of social media platforms such as Twitter and Facebook is proof of how quickly a self-replicating technology can spread, and a warning for technologies that attempt to revolutionize genetics, nanotechnology, and robotics that augment human life.  It is important that humans proceed with caution before we see the Mark Zuckerberg of genetic engineering create the Facebook of gene-modifications that can’t be reversed.

    Just as a single spark can ignite a container of gasoline, an individual today is able to create a self-replicating system that can infect the entire world. It requires cooperation and careful planning to be able to control and harness these systems to be used for the proper augmentation of human life.

  • Kinematics-to-Color Conversion Game (2/13/2012)

    [VIDEO TO COME]

    I. Introduction

    This project was very experimental in nature.  I wanted to create an interface that uniquely translated the mind’s understanding of one common system onto a very different common system through a simple switch interface.  Both the physics of kinematics and the science of color have always intrigued me.  With this project I attempted to interface the two systems by means of an alternative method that the human mind would not typically think of. It was my hope that this odd translation would provide a new lens for understanding both systems, as well as shed new light on the human mind’s perception of a system.  In the end I decided to turn the interface into a game that tracked the player’s progress in translating between the two systems of calculus-based kinematics and the RGB color system.

    II. How It Works

    This game assumes that the player or viewer has a basic understanding of the RGB color system in addition to a grasp of the calculus-based relationship between position, velocity, acceleration, and jerk (the rate of change of acceleration).  Through 4 clicks of the button (or switch) at the center of the application, the system tracks the absolute values of the velocity, acceleration, and jerk of this interaction.  It does this by assuming a “distance” of 3 units – 1 unit for each click after the first one, which represents the starting line.  It then generates an average velocity (v), average acceleration (a), and average jerk (j) based on the timing between each of the 4 clicks.  These three values are then mapped onto the R, G, and B values respectively.  The algorithm behind the conversion assumes that the user’s 4-click timespan (total time) will be between 0.1 seconds and 30 seconds.

    Method for Calculation

    (Note: the values calculated for v, a, and j are rough averages due to the fact that there are only 4 data points recorded.  Additionally, this is one method of averaging the data but there are numerous other ways these averages could have been calculated.)

    Referenced Variables:  T12 = time between the first and second click, T23 = time between second and third click, T34 = time between third and fourth click; additionally T1 = 0 (first click starts timer), T2 = time of second click referencing the timer that starts with the first click, T2.5 = the time halfway between the 2nd and 3rd click, etc.

    Velocity:  This is calculated as the total distance, 3, divided by the total time, T14 (v = x/t):

    v = 3 / (T12 + T23 + T34)

    Acceleration:  This is calculated as the average of the local accelerations between click 1 and click 3, and click 2 and click 4.  In other words, the change in velocity from T12 to T23 is averaged with the change in velocity from T23 to T34.  I used this method because in order to calculate an acceleration there needs to be a change in velocity, and with the recorded data there are actually two changes in velocity (at click 2 and click 3).

    a = ((A123 + A234) / 2)

    A123 = (V23 – V12) / (T2.5 – T1.5)    and     A234 = (V34 – V23) / (T3.5 – T2.5)

    Jerk:  This is calculated as the rate of change of acceleration between A123 and A234:

     j = |(A234 – A123) / (T3 – T2)|

    Method for Conversion

    Once the user has flipped the game’s switch 4 times, the absolute values, or magnitudes, of the above calculations (v, a, and j) are respectively mapped onto a 0-255 range for the red (R), green (G), and blue (B) values of a color.  Initially the system mapped the lowest extremes of negative acceleration and negative jerk to values of 0 for G and B respectively.  This was not necessary for velocity-to-R because it was impossible to generate a negative value for velocity.  After some testing, however, I decided to change the conversion so that it used the absolute values of acceleration and jerk to map to the 0-255 ranges of G and B.  My rationale for this change was that the interaction between the two systems is confusing enough and shouldn’t include variable ranges for a and j, with both negative and positive possibilities, that map onto the G and B variables that only have positive ranges.  Therefore, as it stands now, acceleration values of 0 to 10 are mapped onto G-values of 0-255.  The original method had acceleration values of -10 to 10 being mapped onto G-values of 0-255.
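
    The whole conversion can be summarized in a few lines of code.  The sketch below is written in plain C++ for illustration and is not the game’s actual openFrameworks source; the 0-10 mapping ranges and the example click times are assumptions.

    [sourcecode]
    // Click-timing to RGB conversion (illustrative reconstruction, not the original source).
    #include <cmath>
    #include <algorithm>
    #include <cstdio>

    struct Color { int r, g, b; };

    // Map a magnitude onto 0-255, clamping at an assumed maximum.
    int toChannel(double value, double maxValue) {
      return std::min(255, (int)(std::fabs(value) / maxValue * 255));
    }

    // t1..t4 are the four click times in seconds (t1 = 0 at the first click).
    Color clicksToColor(double t1, double t2, double t3, double t4) {
      double v12 = 1.0 / (t2 - t1);           // local velocities: 1 unit per click interval
      double v23 = 1.0 / (t3 - t2);
      double v34 = 1.0 / (t4 - t3);

      double v = 3.0 / (t4 - t1);             // average velocity over the 3 units

      // Local accelerations centered between the velocity intervals (at T2 and T3).
      double a123 = (v23 - v12) / ((t2 + t3) / 2 - (t1 + t2) / 2);
      double a234 = (v34 - v23) / ((t3 + t4) / 2 - (t2 + t3) / 2);
      double a = (a123 + a234) / 2;

      double j = (a234 - a123) / (t3 - t2);   // rate of change of acceleration

      // Magnitudes of v, a, and j map onto R, G, and B respectively.
      return Color{ toChannel(v, 10.0), toChannel(a, 10.0), toChannel(j, 10.0) };
    }

    int main() {
      Color c = clicksToColor(0, 0.5, 1.2, 2.0);   // example click times in seconds
      std::printf("R=%d G=%d B=%d\n", c.r, c.g, c.b);
      return 0;
    }
    [/sourcecode]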

    III. Process

    Concept

    Early Sketches and Calculations

    These sketches and calculations show some of my thought processes in the development of my system to generate the average values for v, a, and j.  This was the first phase of my pseudocode before jumping into Openframeworks.

    This is the first sketch for the layout of the game’s interface.

    Manifestation

    The final aesthetic was done in Photoshop and the back-end coding was done with openFrameworks.  Below is a screenshot and sample code of the variables I used:

    IV. Conclusion

    The game is not too difficult to pick up if you have a good understanding of calculus and/or kinematics, but it is very difficult to master due to the limited number of player inputs.  I found it hardest to produce a deep green color.  This entailed producing a high but constant acceleration without a high velocity or jerk.

    Moving forward, I would like to get this game up on a website so that more people could test it.  Additionally I would like to make it social by adding a high scores database that all players can view and compete for.  In terms of practical applications for this type of conversion, a similar system might be beneficial to do data visualization in industries where calculating jerk is important.  Examples where people consider jerk include: boxing (the higher the jerk, the more devastating the punch), car accidents, etc.

  • Neurotic Moon (1/9/2012)

    This is a photo of the full moon from the patio in my back yard on January 8th, 2012.

    This is the same shot after some cropping, filtering, and pixel manipulation using Camera+ and Photoshop.  I was loosely inspired by my recent work in EEG and brainwave research.  It reminds me of a busy neuron surrounded by dendrites.

  • Rotunda (1/4/2012)

    I completed this design for my brother’s fraternity rush t-shirt at the University of Virginia.  The Rotunda that is depicted here was designed by Thomas Jefferson and stands at the center of the UVA campus.  It is a symbol of UVA pride.  I will upload a picture of the t-shirt once they are printed.

  • Crossroads (12/29/2011)

    This is a logo that I designed for my mom’s non-profit job placement office, Crossroads.

    [slideshow]

  • Orbitorbs v2.1 – Solar System Simulator (12/15/2011)

    [vimeo 32869590 w=500 h=500]

    Project Summary

    This project is an extension of Orbitorbs v1.0.  I translated the code that I wrote in Processing into openFrameworks, a C++-based creative coding toolkit.  I added additional features that enable more user control over the planetary system, including:

    • The ability to pause the solar system simulation and edit planet parameters
    • A more intuitive interaction for editing planet parameters
    • The ability to turn on and off a function that links the computer microphone volume input to the strength of the gravitational constant dictating the force between the planets (activate by pressing the ‘e’ key and deactivate by pressing the ‘s’ key). The higher the volume, the higher the g-constant (directly proportional).

    The algorithm uses 2-dimensional matrices to store the x and y parameters of the various planets and it implements Newton’s Law of Universal Gravitation:

    F = G * (m1 * m2) / r^2
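
    A bare-bones version of the per-frame gravity update looks something like the C++ sketch below.  It is an illustration of the technique rather than the project’s actual openFrameworks code; the softening term, time step, and example values are assumptions.

    [sourcecode]
    // Pairwise gravitational update for a set of planets (illustrative sketch).
    #include <vector>
    #include <cmath>
    #include <cstdio>

    struct Planet {
      double x, y;     // position
      double vx, vy;   // velocity
      double mass;
    };

    void stepGravity(std::vector<Planet>& planets, double G, double dt) {
      for (size_t i = 0; i < planets.size(); ++i) {
        for (size_t j = 0; j < planets.size(); ++j) {
          if (i == j) continue;
          double dx = planets[j].x - planets[i].x;
          double dy = planets[j].y - planets[i].y;
          double r2 = dx * dx + dy * dy + 1e-6;                     // softening term avoids division by zero
          double f  = G * planets[i].mass * planets[j].mass / r2;   // F = G * m1 * m2 / r^2
          double r  = std::sqrt(r2);
          // Accelerate planet i toward planet j (a = F / m).
          planets[i].vx += (f / planets[i].mass) * (dx / r) * dt;
          planets[i].vy += (f / planets[i].mass) * (dy / r) * dt;
        }
      }
      for (size_t i = 0; i < planets.size(); ++i) {
        planets[i].x += planets[i].vx * dt;
        planets[i].y += planets[i].vy * dt;
      }
    }

    int main() {
      std::vector<Planet> planets = { {0, 0, 0, 0, 1000}, {100, 0, 0, 1, 1} };
      stepGravity(planets, 1.0, 0.1);   // one simulation step with an assumed G and dt
      std::printf("planet 1 is now at (%.2f, %.2f)\n", planets[1].x, planets[1].y);
      return 0;
    }
    [/sourcecode]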

    This project has the potential to be adapted into a new type of learning tool, allowing for a more fun and interactive method for teaching basic principles of physics including angular acceleration, gravitation, ideas of mass and density, and more.

    Orbitorbs v2.1 (openframeworks) from Conor Russomanno on Vimeo.

    The Code

    If you want to play with this application or examine the code, please feel free to grab it from my github.

  • LCD Octopus Animation (to Oahu by The 6ths) (11/20/2011)

    Summary

    This was a project that I did in Physical Computing during my first semester at Parsons.  I used a 2-line by 16-character New Haven LCD Display and an Arduino.  Luckily, I found this great website that provides a hexadecimal generator for creating custom character designs for LCD displays.  The song in the animation is Oahu by The 6ths.

    Animation (Wait for it… waaait for it… :))

    Custom Character Designs

    Arduino Code

    [sourcecode]
    #include <NewSoftSerial.h>

    int rxPin = 4;
    int txPin = 5;

    NewSoftSerial LCD (rxPin, txPin);

    void setup(){
    Serial.begin(9600);
    delay(50);
    LCD.begin(9600);

    LCD.print( 0xFE, BYTE );
    LCD.print( 0x41, BYTE ); // turn LCD on

    LCD.print( 0xFE, BYTE ); // set contrast
    LCD.print( 0x51, BYTE );
    LCD.print( 30, BYTE );

    LCD.print( 0xFE, BYTE ); // set brightness
    LCD.print( 0x53, BYTE );
    LCD.print( 6, BYTE );

    loadCustomCharacters();
    }

    void loop(){

    // LCD.print(0xFE, BYTE); //clear screen
    // LCD.print(0x51, BYTE); //move curs home

    LCD.print(0xFE, BYTE);
    LCD.print(0x45, BYTE); //select curs position

    LCD.print(0x00, BYTE);

    LCD.print(0xFE, BYTE); //move right to adjust for movement
    LCD.print(0x56, BYTE);

    printStretch();
    delay(1000);

    LCD.print(0xFE, BYTE); //move back left to prepare for squish
    LCD.print(0x55, BYTE);

    printSquish();

    LCD.print(0xFE, BYTE);
    LCD.print(0x56, BYTE);

    delay(500);

    }

    void printStretch(){
    LCD.print(0xFE, BYTE);//Set Cursor 00
    LCD.print(0x45, BYTE);
    LCD.print(0x00, BYTE);
    LCD.print(6,BYTE);
    LCD.print(0xFE, BYTE);//Set Cursor 01
    LCD.print(0x45, BYTE);
    LCD.print(0x01, BYTE);
    LCD.print(4, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 02
    LCD.print(0x45, BYTE);
    LCD.print(0x02, BYTE);
    LCD.print(0, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 40
    LCD.print(0x45, BYTE);
    LCD.print(0x40, BYTE);
    LCD.print(7, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 41
    LCD.print(0x45, BYTE);
    LCD.print(0x41, BYTE);
    LCD.print(5, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 42
    LCD.print(0x45, BYTE);
    LCD.print(0x42, BYTE);
    LCD.print(1, BYTE);

    }

    void printSquish(){
    LCD.print(0xFE, BYTE);//Set Cursor 00
    LCD.print(0x45, BYTE);
    LCD.print(0x00, BYTE);
    LCD.print(2,BYTE);
    LCD.print(0xFE, BYTE);//Set Cursor 01
    LCD.print(0x45, BYTE);
    LCD.print(0x01, BYTE);
    LCD.print(0, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 40
    LCD.print(0x45, BYTE);
    LCD.print(0x40, BYTE);
    LCD.print(3, BYTE);
    LCD.print(0xFE, BYTE); //Set Cursor 41
    LCD.print(0x45, BYTE);
    LCD.print(0x41, BYTE);
    LCD.print(1, BYTE);

    LCD.print(0xFE, BYTE); //Set Cursor 02
    LCD.print(0x45, BYTE);
    LCD.print(0x02, BYTE);
    LCD.print(0x20, BYTE);

    LCD.print(0xFE, BYTE); //Set Cursor 42
    LCD.print(0x45, BYTE);
    LCD.print(0x42, BYTE);
    LCD.print(0x20, BYTE);

    }

    void loadCustomCharacters(){
    headLeft();
    headRight();
    stretchTL();
    stretchTR();
    stretchBL();
    stretchBR();
    squishLeft();
    squishRight();
    }

    //START CUSTOM CHARS
    void headLeft(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(0,BYTE);

    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x1e,BYTE);
    LCD.print(0x17,BYTE);
    LCD.print(0x1f,BYTE);
    }

    void headRight(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(1,BYTE);

    LCD.print(0x1f,BYTE);
    LCD.print(0x17,BYTE);
    LCD.print(0x1e,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    }

    void squishLeft(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(2,BYTE);

    LCD.print(0x2,BYTE);
    LCD.print(0x19,BYTE);
    LCD.print(0x5,BYTE);
    LCD.print(0x13,BYTE);
    LCD.print(0xe,BYTE);
    LCD.print(0x6,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0xf,BYTE);
    }

    void squishRight(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(3,BYTE);

    LCD.print(0xf,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0x6,BYTE);
    LCD.print(0xe,BYTE);
    LCD.print(0x13,BYTE);
    LCD.print(0x5,BYTE);
    LCD.print(0x19,BYTE);
    LCD.print(0x2,BYTE);
    }

    void stretchTL(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(4,BYTE);

    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x1e,BYTE);
    LCD.print(0x6,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0x7,BYTE);
    }

    void stretchTR(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(5,BYTE);

    LCD.print(0x7,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0x6,BYTE);
    LCD.print(0x1e,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    }

    void stretchBL(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(6,BYTE);

    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x8,BYTE);
    LCD.print(0x7,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0x0,BYTE);
    }

    void stretchBR(){
    LCD.print(0xFE,BYTE);
    LCD.print(0x54,BYTE);
    LCD.print(7,BYTE);

    LCD.print(0x0,BYTE);
    LCD.print(0x1f,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x7,BYTE);
    LCD.print(0x8,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    LCD.print(0x0,BYTE);
    }
    [/sourcecode]

  • Drawings (10/29/2011)

    The drawings in this collection were completed at various times over the last ~10 years.  My most common (and favorite) medium is pencil on paper.  In all of my design and innovation I rely predominantly on my skill and passion for an HB pencil on white paper.

  • Orbitorbs v1.0 (Planetary Physics Simulation) (8/21/2011)

    PLAY ORBITORBS!

    I completed this piece during Parsons DT Bootcamp 2011 prior to beginning my 1st year at grad school.  I built it using Processing, a Java-based programming environment.

    Demonstration:

  • CyberGRID – NSF-Funded Virtual Collaboration Environment (5/10/2011)

    Introduction

    CyberGRID began as an independent study for Professor John E. Taylor of Columbia University’s civil engineering department.  At the time I approached Professor Taylor, he taught a yearly class that involved doing collaborative design projects with 4 other universities around the world.  This collaboration was facilitated via online classrooms in Second Life, a well-known web-based 3D environment for social, commercial, and academic meetups.  With Second Life, Taylor’s class virtually met and worked with other students and professors from the Indian Institute of Technology Madras, the Helsinki University of Technology (HUT), the University of Twente in the Netherlands, and the University of Washington (Seattle).

    Professor Taylor initially asked a good friend of mine, Daniel Lasry, and me to make alterations to the virtual islands that he was already leasing within the Second Life environment, as well as attempt to write plug-ins for the Second Life interface to provide his students with customized interactivity.  After researching the capabilities of Second Life development, Daniel and I decided that CyberGRID’s customizability was limited by licensing restrictions that Second Life had in place.  We convinced Professor Taylor to allow us to start from scratch and develop a comprehensive and fully customized virtual learning environment using the Unity game development platform, with Maya and Photoshop for asset creation.  The first phase of the project was an independent study in which we familiarized ourselves with the Unity software and began developing a new aesthetic, a new interface, and new functionality based on feedback from users of the previous version of CyberGRID.

    Phase 1 – Early Concepts and Learning (Independent Study)

    During this phase of the project, the other designers and I familiarized ourselves with the Unity development environment.  I had to learn how to optimize 3D models for game design, ensuring that the assets were all polygons and making sure that the faces all pointed in the correct direction.  Below is a scrapped concept render of part of the CyberGRID environment that I created during this early phase of the project.

    Early CyberGRID Environment Concept Render

    Phase 2 – Beta Development (NSF Funding)

    After our team excelled during our independent study, we were hired to continue working on the development of CyberGRID over the summer of 2010.  This is when the project really took off.  I was responsible for designing and creating an extensive virtual environment, creating/locating a collection of 3D virtual avatars (see my 3D Character Design post for a more thorough description of this process) for the future users of the application, animating and texturing the characters and environment, and designing some of the UI.

    Here are some early sketches of the environment design:

    Early Environment Design

    CyberGRID Environment Concept Art

    Here are some screenshots of the UI and game environment:

    CyberGRID Login Interface

    CyberGRID Environment

    Virtual Meeting Room w/ Conference Table & Screen-sharing

    Phase 3 – Refinement and Testing

    As we progressed into the following school year, we stayed on board and expanded the virtual environment and its features.  The following elements were added:

    • 3D sound
    • Personal screen-sharing on a joint virtual screen
    • Avatar customization and animations
    • An explorable Manhattan
    • Annotation of shared documents

    Below is a render of Manhattan (model from Google’s 3D Warehouse w/ my textures) and the Manhattan Bridge, which I modeled from scratch.

    Manhattan

    Here is a screenshot of users interacting with a virtual monitor that is sharing a real-time render of one user who is working in Autodesk Maya – just one of the many powerful system features.

    Virtual Screen-Sharing

    Conclusion – CyberGRID Today

    Currently, the development and use of CyberGRID is being pushed forward by professor John E. Taylor of Virginia Tech’s Engineering Department as he continues his research in virtual learning environments and the psychology of the relationship between human and computer interface.

  • Columbia Manhattanville Bowtie Building Excavation (5/9/2011)

    For this project, I worked with 5 of my classmates in Columbia’s Civil Engineering department to develop the excavation plan and foundation design for the new Bowtie Building being built at 125th and Broadway in Columbia University’s Manhattanville expansion.  Our civil engineering class was separated into 5 groups, each of which worked on a separate aspect of the building.  The other groups were Concrete Design, Steel Design, Project Management, and Green Building Design.  For my group, I contributed to the structural design of the foundation in addition to modeling and rendering a sequence of images that visually detail the process.  Note: this is not actually the building that is being built at the site; our senior class proposed this as our final senior project to our professors.

    Here is the sequence of images that demonstrate our proposed excavation process and our final foundation design:

    [slideshow]

  • CURFC Posters (5/1/2011)

    Here are some posters that I designed for my college rugby club, Columbia University Rugby Football Club (CURFC), while I served as the club’s president.

    That’s me making the tackle in the first one! 🙂

  • The Eye of Big Brother (4/2/2011)

    This 3D rendering (Maya, Photoshop) was an assignment that I completed for an art class at Columbia.  It is a commentary on the rapidly evolving technologies of the book industry as well as a tribute to the book 1984, one of my favorite childhood reads.

    [slideshow]

  • Zombie King (2D Flixel Game) (12/22/2010)

    CLICK HERE to play the game!

    Description

    I designed this top-down computer game, Zombie King, with a few of my friends while at Columbia.  I worked as the team’s primary concept artist and asset designer.  We used Flixel for our game engine, and I used a pencil, paper, and Photoshop for the asset design.  The mechanics behind the game are derived from a narrative where you are a zombie and you must lead a horde of fellow zombies in a war against the humans.

    My Work Involved:

    • Character designs and animation sprites
    • Level Design
    • Concept Art
    • Cover Art
    • Game Mechanics

    Game Art

    Screenshot

    CLICK HERE to play the game!

  • 3D Character Design (8/21/2010)

    Man_1

    Summary:

    This model was initially designed to be part of a custom character database for my CyberGRID project. Developing an entire custom character database at this level of detail proved to be a very time-consuming endeavor.  Thus, the project turned out to be a terrific learning experience in 3D character modeling and texturing, but we ended up purchasing (from turbosquid.com) a line of much less attractive models that were pre-rigged.  Note that this post is somewhat of a walk-through tutorial, but it assumes that the reader has at least a basic understanding of the Autodesk Maya software.

    Resources:

    I used these tutorials as assistance for modeling and texturing.  They go into further detail about the steps involved in both processes:

    3D Character Modeling: http://www.creativecrash.com/tutorials/real-time-character-modeling-tutorial#tabs

    Texturing a 3D Character:  http://www.3dtotal.com/index_tutorial_detailed.php?id=825#.TzQYQExWrDY

    Process:

    Step 1) I started by drawing a front-view symmetrical sketch of the male anatomy, as well as a profile sketch that matched the scale of the front-view drawing (below).  The rear-view drawing was used as a reference once I needed to texture the model.

    Step 2) In Maya, I created simple cylindrical polygons with 8 axis subdivisions (varying numbers of spans depending on the part of the body) and scaled the vertices of the polygons to match the various major limbs (arm and leg) of one half of the body.  I only modeled half of the body so that I would be able to mirror what I had done to create a symmetrical mesh.  I used the front and side sketches above as Image Planes in the front and side windows respectively.  For the torso I used half of a cylinder (deleted the faces of the other half).  Once the arm, leg and half-torso were finished I sewed them together, combining the meshes using the Merge Vertex Tool.

    Step 3) I then used the same technique to model the fingers, thumb, and palm of the hand on the side of the body that I had already modeled.  After it was finished I combined the hand mesh with the rest of the body and then used the Merge Vertex Tool to close the gaps between the meshes.

    Step 4) After this I undertook a similar process for the foot but didn’t put as much detail into it under the assumption that the foot would go into a shoe once the character was rigged and animated.  I then duplicated the half of the body (without the head) and mirrored it to produce a symmetrical body:

    Step 5) I then used a similar process to create the head of the model.  This process was more complex than the body parts due to the topographical abnormalities of a human head (nose, mouth, eyes, ears).  It required more frequent use of the Split Polygon and Insert Edge Loop tools.

    Step 6) Once the model was complete, I used the Cut UVs tool to separate the UV map of the model into flat sections that would be easier to illustrate in Photoshop.  To do this I tried to place all of the UV “seams” in less visible areas (i.e. under the arms, sides of torso, inside of legs).  A good technique is to place the seams along heavy muscle contours – areas where it looks ok to have an abrupt change in color.  I then exported the UV Map and used it as a reference (overlay) to digitally produce the texture in Photoshop.  This process takes a good amount of tweaking because of the counterintuitive nature of drawing a 3D picture on a 2D surface.

    Step 7) I then found a generic eyeball texture from the Internet and mapped it onto two spheres within the head.  In addition, I created a mesh for the hair and used a generic hair texture that I also found on the web.  I then rendered the model using the built-in Mental Ray renderer in Maya.  Most of the rendered images use a standard directional light; the lighter render uses a Physical Sun and Sky (Indirect Lighting).  Here are some of the final renders:

    What’s Next) Next, I want to rig and animate the character, something that I have some experience doing (see my Emperor’s Champion post).  I also want to finally give the guy some clothes!  After that, I plan to make a female counterpart to this model and then sell both of them on a 3D model database like Turbosquid.

  • CURFC Nude Rugby Calendar (11/15/2009)

    While acting as the president of CURFC, my college rugby club, I coordinated the design and mass-printing of a “pseudo-nude” Men’s Rugby Calendar.  The goals for the creation of this calendar were to generate both campus publicity for the club and revenue to help pay for our tour to Paris and Milan in 2010.  The calendar was partly sponsored by C-Spot, an exotic magazine founded at Columbia.  A professional photographer, Harley McGrew, was hired to take and edit the photos.

    If you look at the slideshow below, you’ll understand why the club sports office at Columbia did not receive this initiative well.  I decided not to include interior photos of the calendar other than my own just in case any of my rugby comrades intend to go into politics later in life.  As far as my own political aspirations are concerned, if there’s anything that I learned while serving as CURFC’s president, it’s that it’s better to apologize than to ask for permission.  I’m a world-class damage control expert after that job.

  • Emperor’s Champion (5/14/2009)

    Champ_turnTable_2

    Project Summary

    This 3D animation was inspired by Warhammer 40K, one of my greatest childhood hobbies that I still enjoy today. Warhammer 40K is a complex table-top game that requires each player to assemble and paint their own army.  My primary army, which can be seen in the attached slideshow, is a Black Templars army led by the Emperor’s Champion (the character seen in the 3D animation).  I won’t go into too much detail about the fiction behind the game, but I will say that the Warhammer 40K universe is expansive and the game is for people who love art as much as gaming.  Every player’s army is entirely unique due to the fact that there are no pre-painted models.  It is the only game that I have played that provides as much for my artistic cravings as it does for my desire to play.

    This animated short was created using Maya and Photoshop.  It was the first animation that I ever did during my early explorations into the field of 3D and computer graphics at Columbia.  This clip helped land me the job of teacher’s assistant for Jose Sanchez’s Engineering Graphics class for my last 4 semesters at Columbia.

    The Model

    Emp_Maya_2

    My hand-painted (acrylic) Warhammer 40K Black Templars army:

    [slideshow]

Talks & Workshops

Contact

Your Name (required)

Your Email (required)

Subject

Your Message