The new canvas tag in HTML5 really interests me. Perhaps it's because I have never liked or warmed up to Flash all that much (too proprietary for my liking), but the possibilities are exciting. Especially with Processing.js, which ports the ease and functionality of Processing onto the web (no more applets!). There are some exciting things going on, such as this recent one that visualizes Twitter feeds.
I've had the same design for the site for awhile. Seems like a good time for a redesign. Things will be in flux for a bit while I fiddle.
And move to LA. Things will be in flux while I redesign the site and move to LA.
Ok, I am finally getting around to documenting some work. This is the work that will be going into the show this week and _ Quarterly. It's a continuation of my previous work with text cut-ups, this time using news feeds from major news sources as the source material, capturing and recontextualizing the contemporary zeitgeist.
Here is a little project that I did recently. It is a Processing sketch that draws a map based on how often a country is mentioned in major media outlets (such as the New York Times, the Guardian UK, Al Jazeera, Reuters, and the AP). The brighter the country, the more mentions in the media.
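For the curious, the core idea is just normalizing each country's mention count against the most-mentioned country and using that as a brightness value. Here is a minimal sketch of that step in plain Java; the counts below are made up for illustration, not real feed data:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MentionBrightness {
    // Scale a mention count to a 0-255 brightness, relative to the
    // most-mentioned country.
    static int brightness(int mentions, int maxMentions) {
        if (maxMentions == 0) return 0;
        return (int) Math.round(255.0 * mentions / maxMentions);
    }

    public static void main(String[] args) {
        // Hypothetical counts, not actual data from the feeds.
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("Iraq", 120);
        counts.put("France", 30);
        counts.put("Chile", 6);

        int max = counts.values().stream().max(Integer::compare).orElse(0);
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + " -> " + brightness(e.getValue(), max));
        }
    }
}
```

In the actual sketch, that brightness would be used as the fill color when drawing each country's shape.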
A note about the applet: click the 'approve' button that will pop up (I have to sign the applet for security reasons) and give it a moment to do its thing; it's doing a lot behind the scenes!
I wrote a new Processing sketch today (I love the inspiration that time off affords) that takes color information from each pixel in an image and uses it to create sound. I extracted the red, green, blue, hue, saturation and brightness of each pixel, and created a sine wave for each value. Together, they create interesting harmonies, a sort of audible version of my last project. There are two versions that I am sharing. The first is the original version, which uses SuperCollider to generate the sound. Since it depends on SuperCollider, I can't post it on the web, but the sound is much better. You can download the package here. The second uses a Processing-friendly, and web-friendly, library, Minim, which doesn't sound as good (I can't get rid of the pops and clicks when doing sine waves for some reason), but it allows me to post the sketch on the web, which I have here. Enjoy!
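The mapping itself is simple: each channel value becomes the frequency of one sine wave, and the waves are summed into a single signal. Here is a small sketch of that idea in plain Java; the 80–1000 Hz range is my choice for illustration, not necessarily the mapping the original sketch uses:

```java
public class PixelTones {
    // Map one color channel value (0-255) into an audible frequency range.
    // The 80-1000 Hz range here is an assumption for the example.
    static double channelToFreq(int value) {
        return 80.0 + (value / 255.0) * (1000.0 - 80.0);
    }

    // One audio sample at time t: the sum of one sine wave per channel,
    // normalized so the mix stays within [-1, 1].
    static double sample(int[] channels, double t) {
        double s = 0;
        for (int v : channels) {
            s += Math.sin(2 * Math.PI * channelToFreq(v) * t);
        }
        return s / channels.length;
    }

    public static void main(String[] args) {
        int[] rgb = {200, 40, 90};  // hypothetical pixel values
        System.out.printf("freqs: %.1f %.1f %.1f%n",
                channelToFreq(rgb[0]), channelToFreq(rgb[1]), channelToFreq(rgb[2]));
        System.out.println("sample at t=0: " + sample(rgb, 0.0));
    }
}
```

Feeding samples like these to an audio library (SuperCollider or Minim, in the two versions above) is what turns the pixel data into sound.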
Update: By the way, it sounds much better on speaker set-ups with subwoofers, as there are some really low frequencies that come out distorted on smaller laptop speakers.
For one of my classes, I wrote a Processing sketch that does a web image search for keywords from the class, such as "California Ideology", "canon", "code", "commons", "mass culture", "media", "privacy", "public good", "reductionism", "representative", "retro-futurism", "secrecy" and "sousveillance". It then takes the images and weaves their pixels together, sort of like a tapestry. The code is here.
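The weaving boils down to interleaving rows from the different images round-robin, the way weft threads alternate in a tapestry. A minimal sketch of that step in plain Java (using small arrays of packed pixel values as stand-ins for the search-result images):

```java
public class PixelWeave {
    // Interleave rows round-robin from several equally sized images,
    // represented here as 2D arrays of packed pixel values.
    static int[][] weave(int[][][] images) {
        int h = images[0].length;
        int[][] out = new int[h][];
        for (int row = 0; row < h; row++) {
            // Row `row` of the output comes from image (row mod imageCount).
            out[row] = images[row % images.length][row];
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 1}, {1, 1}, {1, 1}, {1, 1}};  // stand-in "image" of 1s
        int[][] b = {{2, 2}, {2, 2}, {2, 2}, {2, 2}};  // stand-in "image" of 2s
        for (int[] row : weave(new int[][][]{a, b})) {
            System.out.println(row[0]);  // alternates between the two sources
        }
    }
}
```

The real sketch works on the pixel arrays of the downloaded images, but the alternating-row structure is the same.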
I have been meaning to post this for awhile. Among other things, I recently wrote a Processing sketch that uses Carnivore to sniff the packets on a local network, then does a whois lookup on each IP address to find the physical address attached to it. The sketch then displays the address and plots it on a map. The more packets that come from a location, the brighter the dot.
Since this is a Processing sketch, I could have done an embedded applet for this entry, but Carnivore requires some changing of permissions on the local machine to do the packet sniffing. And well, you can't do that from a web page.
The code for this is here, but I will warn you now that it is undocumented. And it comes with no warranty, use at your own risk, etc. Enjoy!
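For anyone curious about the plotting part, the idea is just projecting a latitude/longitude pair onto the map image and counting packets per location, with the count driving the dot's brightness. A minimal sketch in plain Java, assuming an equirectangular map and made-up coordinates (the real sketch gets its locations from the whois lookups):

```java
import java.util.HashMap;
import java.util.Map;

public class PacketMap {
    static final int W = 640, H = 320;  // assumed map image size

    // Equirectangular projection: longitude -180..180 -> x,
    // latitude 90..-90 -> y (top of the image is the north pole).
    static int[] project(double lat, double lon) {
        int x = (int) Math.round((lon + 180.0) / 360.0 * (W - 1));
        int y = (int) Math.round((90.0 - lat) / 180.0 * (H - 1));
        return new int[]{x, y};
    }

    // Packets seen per projected point; the count drives dot brightness.
    static Map<String, Integer> counts = new HashMap<>();

    static void addPacket(double lat, double lon) {
        int[] p = project(lat, lon);
        counts.merge(p[0] + "," + p[1], 1, Integer::sum);
    }

    public static void main(String[] args) {
        addPacket(40.71, -74.0);  // hypothetical packet traced to New York
        addPacket(40.71, -74.0);  // a second packet from the same place
        counts.forEach((k, v) -> System.out.println(k + " x" + v));
    }
}
```

In the sketch itself, each entry in that map becomes a dot whose brightness scales with its count.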
Each horizontal line corresponds to a horizontal line of a frame from a video sequence. So, this is 360 pixels high = 360 frames from a sequence. What is shown here are the frames scanning down: the first frame of this is made up of the first lines of all of the frames, the second frame of this is made up of the second lines of all of the frames, and so on and so forth.
Made with Processing.
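In code, this is a swap of the time axis and the vertical axis: output frame k, row i is input frame i, row k. A small sketch of that rearrangement in plain Java, with tiny arrays standing in for video frames:

```java
public class ScanSwap {
    // Swap time and vertical position:
    // output frame k, row i = input frame i, row k.
    static int[][][] swap(int[][][] frames) {
        int n = frames.length;         // number of input frames
        int h = frames[0].length;      // rows per frame
        int w = frames[0][0].length;   // pixels per row
        int[][][] out = new int[h][n][w];
        for (int k = 0; k < h; k++)
            for (int i = 0; i < n; i++)
                out[k][i] = frames[i][k].clone();
        return out;
    }

    public static void main(String[] args) {
        // Two 2x1 "frames": frame 0 has rows {10},{11}; frame 1 has {20},{21}.
        int[][][] out = swap(new int[][][]{{{10}, {11}}, {{20}, {21}}});
        // Output frame 0 stacks the first rows of every input frame.
        System.out.println(out[0][0][0] + " " + out[0][1][0]);
        System.out.println(out[1][0][0] + " " + out[1][1][0]);
    }
}
```

The Processing version does the same thing with the pixel rows of captured video frames.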
Here is a test render/animation of some of the things that I have been doing lately. Click the image to view the video. The link might take a while to load, it's about 12 megs or so, so give it a sec. But it only took about 12 hours to render 30 seconds of footage on my little iBook G4.
If you are interested in the source code, let me know and I will send it to you. I would post it, but I have been constantly working on it and tweaking it, and by the time I post it, I will have made some change to it. So just let me know and I will pass it on to you.
For the past week or so, I have been working with Processing and POVRay as part of a group collaborative project. I hadn't worked with either program before, but you can see the results up top. I use Processing to generate random shapes, which it then passes to POVRay to render out. I am having to use POVRay, as that is what we are going to be using eventually to make videos.
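The handoff between the two programs is just text: the sketch writes out a POV-Ray scene file that POVRay then renders. Here is a minimal sketch of that export step in plain Java, generating random spheres; the actual shapes, materials, and camera setup in the project are different, this just shows the scene-file idea:

```java
import java.util.Locale;
import java.util.Random;

public class PovExport {
    // Build a minimal POV-Ray scene: a camera, a light, and n random spheres.
    // The scene syntax is standard POV-Ray; positions and radii are random.
    static String scene(long seed, int n) {
        Random rng = new Random(seed);
        StringBuilder sb = new StringBuilder();
        sb.append("camera { location <0, 0, -10> look_at <0, 0, 0> }\n");
        sb.append("light_source { <5, 10, -10> color rgb <1, 1, 1> }\n");
        for (int i = 0; i < n; i++) {
            double x = rng.nextDouble() * 6 - 3;  // random position in a cube
            double y = rng.nextDouble() * 6 - 3;
            double z = rng.nextDouble() * 6 - 3;
            double r = rng.nextDouble() * 0.8 + 0.2;  // random radius
            sb.append(String.format(Locale.ROOT,
                "sphere { <%.2f, %.2f, %.2f>, %.2f pigment { color rgb <1, 1, 1> } }%n",
                x, y, z, r));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Write this string to a .pov file and render it with POVRay.
        System.out.print(scene(42, 3));
    }
}
```

In Processing, the same string would be written out with something like saveStrings(), and rendering a sequence of these files is what eventually becomes the video frames.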