Menu

State of the Art

This state of the art surveys a number of projects, including software, platforms and research, that bring AI and machine learning into the creative field, pushing the boundaries of creativity and investigating the implications of AI on society, art and culture.

AIArtists.org

AIArtists.org is a community of artists exploring the impact of AI on art, culture and society. Their website showcases pioneering artists who are using Artificial Intelligence to push the boundaries of creativity and investigate the implications of AI on society, art and culture. (Check out here!)

AI Meets Design

AI Meets Design is a website building a bridge between the disciplines of AI and design and exploring how to design human-centered AI applications. It also provides a design toolkit: a set of tools for each step of the design (thinking) process to help designers turn AI into social, user, and business value. It is an invitation to designers and innovators everywhere to design with and for machine intelligence to create human-centered applications and meaningful user experiences. (Check out here!)

Machine Learning for Musicians and Artists - Kadenze

July 2019 | By Rebecca Fiebrink

This is an online course taught by Rebecca Fiebrink on Kadenze. It is a creative machine learning course for artists and musicians to learn fundamental machine learning techniques that can be used to make sense of human gesture, musical audio, and other real-time data. The focus is on algorithms, software tools, and best practices that can be immediately employed in creating new real-time systems in the arts. (Check out here!)

Wekinator - The machine learning tool taught in the course

The Wekinator is free, open source software originally created in 2009 by Rebecca Fiebrink. It allows anyone to use machine learning to build new musical instruments, gestural game controllers, computer vision or computer listening systems, and more. It allows users to build new interactive systems by demonstrating human actions and computer responses, instead of writing programming code. (Check out here!)
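The core idea behind this training-by-demonstration workflow can be sketched with a toy nearest-neighbour classifier. This is not Wekinator's actual code, and the 2-D "gesture" feature vectors below are made up; it only illustrates how pairing demonstrated inputs with desired outputs replaces writing explicit rules:

```python
import math

# Toy sketch of Wekinator-style training-by-demonstration:
# record (input features -> desired output) examples, then classify
# new inputs with 1-nearest-neighbour. The 2-D feature vectors are
# hypothetical gesture readings (e.g. accelerometer x/y).

examples = []  # list of (features, label) pairs

def record(features, label):
    """Demonstrate: pair a human action with a desired computer response."""
    examples.append((features, label))

def classify(features):
    """Respond: return the label of the closest recorded demonstration."""
    return min(examples, key=lambda ex: math.dist(ex[0], features))[1]

# "Training" is just demonstrating a few examples per gesture.
record((0.1, 0.9), "wave")
record((0.9, 0.1), "punch")
record((0.2, 0.8), "wave")

print(classify((0.15, 0.85)))  # a wave-like input -> "wave"
```

A real system like Wekinator adds live input streaming, several model types, and continuous (regression) outputs on top of this basic loop.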

IDEO’s AI Ethics Cards

July 2019 | By IDEO

IDEO’s AI Ethics Cards are a tool to help guide an ethically responsible, culturally considerate, and humanistic approach to designing with data. The deck is made up of four core design principles and ten activities, all meant for use by teams working on the development of new, data-driven, smart products and services. (Check out here!)

Experiments with Google (AI Experiments)

Experiments with Google is a platform run by Google Creative Lab since 2009, collecting experiments that communicate the ideas behind different technologies. The website includes a section on "AI Experiments", a showcase of simple experiments that make it easier for anyone to start exploring machine learning through pictures, drawings, language, music, and more. (Check out here!)

Some relevant projects in AI creativity:

- Teachable Machine: an online interface for training models in the browser
- Sketch-RNN: predicting possible endings of various incomplete sketches

Magenta

Magenta is an open source research project exploring the role of machine learning as a tool in the creative process. It is distributed as an open source Python library and JavaScript API for using the pre-trained Magenta models in the browser, powered by TensorFlow. (Check out here!)
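To give a flavour of the sequence modelling behind melody generators like Magenta's, here is a crude stdlib sketch using a first-order Markov chain over MIDI pitches. This stands in for the recurrent neural networks Magenta actually uses, and the training melody is invented for illustration:

```python
import random
from collections import defaultdict

# Crude illustration of sequence modelling for melody generation:
# learn which pitch tends to follow which, then sample a new melody.
# The training melody (MIDI pitch numbers) is made up.

training_melody = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]

# Count note-to-note transitions observed in the training data.
transitions = defaultdict(list)
for a, b in zip(training_melody, training_melody[1:]):
    transitions[a].append(b)

def generate(start, length, seed=0):
    """Sample a melody by repeatedly picking an observed successor pitch."""
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate(60, 8))
```

Magenta's real models (e.g. its pre-trained checkpoints usable from Python or the browser) capture far longer-range structure than this one-step chain, but the train-then-sample loop is the same shape.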

There are some relevant projects from Magenta:

NSynth Super

March 2018 | By Magenta, Google Creative Lab

Making music using new sounds generated with machine learning.

It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics.

Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once. Since the release of NSynth, Magenta has continued to experiment with different musical interfaces and tools to make the output of the NSynth algorithm more easily accessible and playable.

NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

In short, NSynth takes several different sounds as input and generates a new sound by combining their learned features. (Check out here)
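The feature-combination step can be pictured as interpolation in an embedding space: encode each source sound to a vector, blend the vectors, then decode the blend. The sketch below shows only the blending; the 4-D "embeddings" are made-up stand-ins for the much larger latent vectors NSynth's neural encoder would produce:

```python
# Sketch of latent-space interpolation, the idea behind NSynth-style
# sound blending. Encoding audio to embeddings and decoding the mix
# back to audio is the neural network's job; the vectors here are
# hypothetical stand-ins.

def interpolate(emb_a, emb_b, t):
    """Linear blend of two embeddings; t=0 gives a, t=1 gives b."""
    return [(1 - t) * a + t * b for a, b in zip(emb_a, emb_b)]

flute_embedding = [0.2, 0.8, 0.1, 0.5]   # hypothetical
sitar_embedding = [0.9, 0.1, 0.6, 0.4]   # hypothetical

halfway = interpolate(flute_embedding, sitar_embedding, 0.5)
print(halfway)  # part flute, part sitar, in embedding space
```

Decoding `halfway` (in the real model) yields a sound with acoustic qualities of both sources, rather than a simple audio mix of the two recordings.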

Objectifier Spatial Programming

May 2017 | By Bjørn Karmann

Train objects in your environment to respond to your behavior.

Objectifier Spatial Programming (OSP) empowers people to train objects in their daily environment to respond to their unique behaviors. It gives an experience of training an artificial intelligence; a shift from a passive consumer to an active, playful director of domestic technology. Interacting with Objectifier is much like training a dog - you teach it only what you want it to care about. Just like a dog, it sees and understands its environment.

With computer vision and a neural network, complex behaviors are associated with your command. For example, you might want to turn on your radio with your favorite dance move. Connect your radio to the Objectifier and use the training app to show it when the radio should turn on. In this way, people will be able to experience new interactive ways to control objects, building a creative relationship with technology without any programming knowledge. (Check out here!)

Nvidia GauGAN

2019 | By Nvidia

GauGAN creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene. Artists can use paintbrush and paint bucket tools to design their own landscapes with labels like river, rock and cloud. A style transfer algorithm allows creators to apply filters — changing a daytime scene to sunset, or a photorealistic image to a painting. Users can even upload their own filters to layer onto their masterpieces, or upload custom segmentation maps and landscape images as a foundation for their artwork. (Check out here)

Project Dreamcatcher

2016 | By Autodesk

Generative design is a design exploration process. Dreamcatcher is a generative design system that enables designers to craft a definition of their design problem through goals and constraints. This information is used to synthesize alternative design solutions that meet the objectives. Designers are able to explore trade-offs between many alternative approaches and select design solutions for manufacture. The system allows designers to input specific design objectives, including functional requirements, material type, manufacturing method, performance criteria, and cost restrictions. The software explores all the possible permutations of a solution, quickly generating design alternatives. It tests and learns from each iteration what works and what doesn’t. (Check out here)
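The generate-and-test loop described above can be sketched as a simple random search: propose candidate designs, discard those that violate the constraints, and rank the rest by an objective. Everything in this sketch is hypothetical (a toy "beam" with width/height parameters and crude formulas standing in for real simulation); Dreamcatcher's actual solvers and solution space are far richer:

```python
import random

# Toy generate-and-test loop behind generative design: sample candidate
# designs, keep the feasible ones, pick the best by an objective.
# The design problem below (a beam sized by width/height) is invented.

random.seed(42)

MAX_COST = 100.0       # constraint: cost restriction
MIN_STRENGTH = 50.0    # constraint: performance criterion

def evaluate(width, height):
    strength = width * height ** 2   # crude stand-in for a simulation
    cost = 2.0 * width * height      # crude stand-in for material cost
    return strength, cost

candidates = []
for _ in range(1000):                          # generate alternatives
    w = random.uniform(1.0, 10.0)
    h = random.uniform(1.0, 10.0)
    strength, cost = evaluate(w, h)
    if cost <= MAX_COST and strength >= MIN_STRENGTH:   # test feasibility
        candidates.append((cost, strength, w, h))

best = min(candidates)                         # cheapest feasible design
print(f"cost={best[0]:.1f} strength={best[1]:.1f}")
```

A designer-facing system would present many of these feasible candidates side by side so trade-offs (here, cost versus strength) can be explored rather than picking one automatically.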

RunwayML

July 2019 | By RunwayML

RunwayML is a platform for creators of all kinds to use machine learning tools in intuitive ways without any coding experience. It is a powerful and easy-to-use application that makes machine learning more accessible and inclusive to creators. It lets anyone with a computer start to explore and create using the latest AI and machine learning models. (Check out here)

AI Generated Images / Pictures

Example: a photo stylized by the Deep Dream Generator