Many of us use Instagram to document our latest meals, but illustrator and art director David Schwen has taken things a step further with his #pantonepairings series of photos. The images are a playful take on paint chips, only instead of paint they feature combinations of food, like peas and carrots or cookies and milk.
“Computing/Drawing With a Vintage Pen Plotter” by Carl Lostritto
Modern drawing experiments that use older output technology to create abstract art:
This is the database of “Computing/Drawing With a Vintage Pen Plotter,” a project by Carl Lostritto. Drawings are organized by method, series, and run, using the syntax Method-Series-Run. A “method” is an algorithmic approach to controlling the pen plotter and is the most general way to organize the drawings. Within each method, a “series” refers to a specific Python script and/or plotter configuration. A “run” refers to one drawing within the series; whether a drawing is re-plotted or a script produces multiple drawings, the run identifier tracks their production order over time.
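The Method-Series-Run naming scheme lends itself to simple programmatic handling. As a rough sketch (the field names come from the project's description, but the example identifiers are hypothetical):

```python
# Minimal sketch: parsing the Method-Series-Run identifiers described above.
# The three-part scheme is from the project's description; the example
# values ("Scribble-A-3" etc.) are invented for illustration.

def parse_drawing_id(drawing_id):
    """Split a 'Method-Series-Run' identifier into its three parts."""
    method, series, run = drawing_id.split("-")
    return {"method": method, "series": series, "run": int(run)}

# e.g. sorting a batch of drawings by production order within a series
ids = ["Scribble-A-3", "Scribble-A-1", "Scribble-A-2"]
ordered = sorted(ids, key=lambda d: parse_drawing_id(d)["run"])
```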
The collection can be found here, with examples of animations demonstrating techniques as well as videos of the mechanical drawing process.
Building on tools for augmenting space: AR dynamic meta-tagging.
Paste augmented-reality video graffiti on the streets
Technology being developed for annotating locations with animated material using augmented reality - via New Scientist:
Using the AR apps available for smartphones or tablets, anybody can overlay digital text, video and graphics onto the physical world for others to see later. Most major cities are teeming with these digital annotations. You just need to identify a tagged location using your smartphone’s map, and watch through the camera using an AR app. Hey presto, a video or animation will then be overlaid on the scene.
Yet if somebody wants to annotate a place with video that they’ve filmed themselves, today’s apps are constrained. They can only overlay a YouTube clip, say, in its original rectangular shape. Now Tobias Langlotz of Graz University of Technology, Austria, and colleagues have designed software that can cut a person or an object out of a video, so that they alone can be pasted as a digital overlay. The idea is to make virtual human guides that could offer city tours or how-to demos, as well as enhancing AR games.
Langlotz and colleagues used a computer-imaging technique called foreground-background segmentation to identify the required foreground object - usually a person. So a user would film a video, then simply point to the object they wanted to extract. The software would do the rest. In a demo, they filmed a skateboarder doing a jump, and showed how he could be pasted onto a street scene. When the app “sees” the environment, it can replay the person in the right place, skating along the ground, for example.
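The segmentation step described above can be illustrated in a highly simplified form. The researchers' software is more sophisticated (the user points at the object to extract); this sketch shows only the core idea of foreground-background segmentation by frame differencing, on a toy grayscale grid:

```python
# Minimal sketch of foreground-background segmentation by frame differencing.
# Pixels that differ from a reference background frame by more than a
# threshold are treated as foreground. The grids and threshold here are
# invented for illustration.

def segment_foreground(background, frame, threshold=30):
    """Return a binary mask: 1 where the frame differs from the background."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

# Toy 3x3 grayscale example: a bright "object" appears at the centre pixel.
background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = segment_foreground(background, frame)
# mask -> [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

In practice this kind of mask would be computed per frame with a vision library, then used to cut the foreground object out of the video for overlay.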
You can find out more and watch a video at New Scientist here
Awesome post from Archello:
Turn on the music-
Photographer Stephen Ferry has spent ten years documenting the ongoing internal armed conflict in Colombia. In his recently published book, Violentology: A Manual of the Colombian Conflict, Ferry presents a comprehensive look at this incredibly complicated and brutal conflict using his own photographs, historical imagery, and text.
Ferry sat down with LightBox to narrate a video tour of the new book.
Read the story and watch the video here.
Brainless Slime Molds Shed Light on the Evolution of Memory
“We have shown for the first time that a single-celled organism with no brain uses an external spatial memory to navigate through a complex environment,” said Christopher Reid from the University’s School of Biological Sciences.
…“Results from insect studies, for example ants leaving pheromone trails, have already challenged the assumption that navigation requires learning or a sophisticated spatial awareness. We’ve now gone one better and shown that even an organism without a nervous system can navigate a complex environment, with the help of externalized memory.”
The research method was inspired by robots designed to respond only to feedback from their immediate environment in order to navigate obstacles and avoid becoming trapped. This “reactive navigation” method allows robots to navigate without a programmed map or the ability to build one; slime molds use the same process.
When it is foraging, the slime mold avoids areas that it has already “slimed,” suggesting it can sense extracellular slime upon contact and will recognize and avoid areas it has already explored.
…“We then upped the ante for the slime molds by challenging them with the U-shaped trap problem to test their navigational ability in a more complex situation than foraging. We found that, as we had predicted, its success was greatly dependent on being able to apply its external spatial memory to navigate its way out of the trap.”
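The reactive-navigation strategy described above can be sketched as a toy simulation: an agent that senses only its four neighbours, cannot build a map, but marks each cell it visits (its "slime") and prefers unvisited cells. The grid layout, start, and goal positions below are hypothetical, not taken from the study:

```python
# Toy illustration of reactive navigation with externalized spatial memory,
# loosely modelled on the slime-mold experiment: the agent greedily moves
# toward the goal but avoids cells it has already "slimed", which lets it
# back out of a U-shaped trap. All positions here are invented.
from collections import defaultdict

def escape(walls, start, goal, size=5, max_steps=200):
    visits = defaultdict(int)   # the externalized memory: slime per cell
    pos = start
    visits[pos] = 1
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            return path
        y, x = pos
        candidates = []
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ny, nx = y + dy, x + dx
            if 0 <= ny < size and 0 <= nx < size and (ny, nx) not in walls:
                dist = abs(ny - goal[0]) + abs(nx - goal[1])
                candidates.append(((visits[(ny, nx)], dist), (ny, nx)))
        # least-slimed neighbour first, then the one closest to the goal
        pos = min(candidates, key=lambda c: c[0])[1]
        visits[pos] += 1
        path.append(pos)
    return None  # step budget exhausted without reaching the goal

# U-shaped trap open at the bottom, with the agent starting inside it
walls = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 3)}
path = escape(walls, start=(2, 2), goal=(0, 2))
```

A purely greedy agent with no memory would oscillate at the closed top of the trap; with the visit counts acting as external memory, the agent retreats through the opening and walks around the wall to the goal.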