
A turn towards textual practice and GAN-informed design. | by Egmontas Geras | Sep, 2022

by Egmontas Geras
September 8, 2022


I will not pretend to show, or adhere to, any typically organic development in these texts. Recently I had a revelation about my desire for what I’ve been referring to as procedural design: perhaps more plainly, the use of node-based workflows as methods for drawing. There is a comfort in the compartmentalisation of design steps into analogous boxes filled with (usually) singular actions that then lead to, or at least point towards, an overall logical whole. By consistently knowing that the boxes are shut and safe, and that the path trod is visible, one can branch without fear, often die, and occasionally bloom. These analogies overlap, of course. They are the methods applied and experienced daily to control and remind oneself of the fluid/non-fluid states of the creative process. Here and now, however, on this digital canvas, I find myself in a very uncomfortable state: can I treat this exercise as a discrete box in a set of many? Designed to be re-designed; to branch, die and bloom?

Today’s mental flow-state suggests a return to the GAN (Generative Adversarial Network) fork of research. Originally — and this is before the spring/summer hype of text-to-image generative AI chatter on social media — image-to-image GANs were my gateway interest to AI and machine learning. This visually dominant asphyxiation by images of peculiar character and appeal eventually catalysed an intention to understand, and acclimatised me towards a deeper, digital resonance. It feels like: learn me, use me. I had experienced GANs as almost throwaway, buzzword objects in certain computational-design-&-theory contexts. Generative imaging appears to be a desired design tool; AI (in any form) is a desired appendage to sexify design method; data is necessary, data is cool, labelling data is really un-fucking-cool. Generic GAN conclusions arrived at would include: can some*thing* draw for you? Can some*thing* store knowledge and, in turn, apply existing knowledge to new situations? Can some*thing* reveal ‘new’ relationships? By contrast, there seems to be a flat fascination, and an underlying aspect of gambling, in the now-popularised text-to-image, closed-ecosystem +UI processes.

“I want something. I’ll try to describe it. Give it to me.”

text2img Prompts (left: Midjourney; right: DALL-E 2).

For a design practice dealing with vision, img2img adversity currently radiates greater appeal. A primary attraction: I can draw both sides of dual-image datasets. In other words, datasets can be designed, learnt from, and iterated upon. This is fascinating, especially amid the seemingly widespread acceptance and utilisation of boring data (‘a cat is a cat’-type data). Self-implementation on the back of existing repos allows the ‘body’ of an AI to be laid bare and flayed. We can infiltrate, observe and tinker. A hacky but inherently didactic practice. We can even pull on our heartstrings — to watch something grow and learn is beautiful. Or perhaps I’m a romantic. Manipulable data, manipulable code and a designed output — the object labelled an AI becomes a graspable plaything to push forward an imaging discourse: an extended branch for computer-aided vision and computational design.

I’d like to briefly and self-consciously unpack a first attempt at implementing the pix2pix GAN (whilst glossing over a boatload of info of future/past relevance — that is for another time). I had been toying with complex dynamic systems, in this case cellular automata, for their implicit capacity to store data whilst dynamically shifting states over time, given an inherent ruleset and a set of conditions for life (or, one could say, dynamics). These key characteristics immediately imply that an automaton can be described in multiple ways: before, after, or during simulation. My automata were developing — taking on three-dimensional form, storing relational data, ageing, and inheriting energy… (amongst other in-progress features). A thought emerged: take a before and predict an after.
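To make the before/after idea concrete, here is a minimal sketch of one such pair being generated. It assumes a toy 2D automaton with Game of Life rules, standing in for my richer, energy-driven, volumetric ruleset — a stand-in, not the actual system:

```python
import numpy as np

def step(grid):
    """One update of a toy 2D automaton (Game of Life rules: a live cell
    survives with 2-3 live neighbours; a dead cell is born with exactly 3)."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
before = (rng.random((256, 256)) < 0.2).astype(np.uint8)  # initial state, epoch 0

after = before
for _ in range(100):  # run 100 simulation epochs
    after = step(after)
# (before, after) is now one dual-image training pair: real A and real B.
```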

“There is an observed, underlying kink for prospective design systems.”

Early Training Iterations — beautiful errors describing potential dynamic systems.

Can we model a prospective GAN that reads the input data used to initialise the system and outputs that system after it has been active a certain amount of time into the future? Hypothesis: given the nature of an automaton, the relationship should be calculable. We can already generate countless initialisations and run dependent simulations that produce results (albeit abstract ones). Before and after: a dual-image relationship is utilised to capture & describe the state of the system — in this example, a single, two-dimensional slice of a volumetric system. The generated dataset includes the following (a packing sketch follows the list):

GAN img0 (real A): a slice describing the initial state of energy (distribution and magnitude) of the system.

GAN img1 (real B): a slice describing the automaton’s state, 100 simulation epochs in the future.

GAN img2 (fake B): a prediction of an automaton’s state, 100 epochs in the future, given an initial energy distribution of the system at 0 epochs.
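A minimal sketch of how such pairs could be packed for training. The credited pix2pix repository’s aligned-dataset loader expects each training sample as a single image with A and B concatenated side by side under a dataroot folder; the `step()` function is the toy automaton from the earlier sketch, and the folder name and sample count here are placeholders:

```python
from pathlib import Path

import numpy as np
from PIL import Image

def save_pair(before, after, out_path):
    """Write one aligned pix2pix sample: the 'before' slice (real A) and
    the 'after' slice (real B) concatenated horizontally."""
    a = Image.fromarray(before * 255).convert("RGB")
    b = Image.fromarray(after * 255).convert("RGB")
    pair = Image.new("RGB", (a.width + b.width, a.height))
    pair.paste(a, (0, 0))
    pair.paste(b, (a.width, 0))
    pair.save(out_path)

out_dir = Path("datasets/automata/train")  # placeholder dataroot
out_dir.mkdir(parents=True, exist_ok=True)
for i in range(1000):  # placeholder dataset size
    rng = np.random.default_rng(i)
    before = (rng.random((256, 256)) < 0.2).astype(np.uint8)
    after = before
    for _ in range(100):
        after = step(after)
    save_pair(before, after, out_dir / f"{i:04d}.png")
```

Training would then follow the repo’s documented pattern, something like `python train.py --dataroot ./datasets/automata --name automata_pix2pix --model pix2pix --direction AtoB` (the name being, again, a placeholder).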

GAN Training Iteration 009 (left-to-right img0, img1, img2).
GAN Training Iteration 100 (left-to-right img0, img1, img2).

A sliver of poetics… the three images work at a per-cell, per-pixel resolution. The image analogy is remarkably close to the intended (true) form in this context, where cell ≈ pixel. Every new, predicted data point is a cell and its condition in the future… Our GAN generates a journey through time. Even the inaccurate model predictions are slices of an automaton that could hypothetically be turned back on… and play its life out.
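That “turning back on” is literal at this resolution — a sketch, continuing from the earlier snippets, of thresholding a predicted slice back into a live automaton state (the file path is hypothetical; the repo writes generated images under its results folder):

```python
# Read a predicted slice back and re-simulate it: cell ≈ pixel.
pred = np.asarray(Image.open("results/automata/fake_B_0001.png").convert("L"))
revived = (pred > 127).astype(np.uint8)  # threshold grey values into live/dead cells

for _ in range(50):  # let the predicted state play its life out
    revived = step(revived)
```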

GAN Training Iterations (200 total epochs, 74 snapshots).

That’s enough for now. There are multiple charged strands being teased here. In full transparency, I had to get something out the door. I nearly abandoned multiple segments for fear of lacking detail and descriptive rigour. But then… thoughts are thoughts, and my desire is for them to fly, even if naively constructed. It (this process) is organic in the sense of following a path with a mean vector in a particular direction. This path leans, meanders & breaks; this path implies stopping to smell the multiplicity of flora, buds sprouting distances away and occasionally overhanging a deep puddle or cliff that one falls down and clambers back up again. I am under a calm guise, with a tickling excitement to share. I’m dabbling, learning as I go — no doubt, mistakes are being made. Play.

A last thought. The blank-canvas effect is particularly strong here, in an unpractised medium. I have managed to make habitual a daily, open text document for scribbling (typing) in at various points in time. This freeflow state is helpful. Starting the day by opening it up is helpful. The text has no intent other than to braindump — particularly in times of excess anxiety or lack of clarity of thought. Of course, it also has the benefit of storing ideas — although inefficiently, for the lack of organisation: no labels, no search functionality, no relation to tidiness, and thus little replay value… Somewhat ironically, there may be a necessity to ‘hashtag’ themes amongst brain-dumped digital notes. A more robust thought structure is being explored via other methods… there is a correlation whereby the more coordinated tends towards less free, less frivolous, less flippant thought. No matter, practice.

egmontas

GAN Training Iterations: A journey through time and the poetics of watching some*thing* learn.

Credit:

The work referred to in this text was made possible with the help of this repository for CycleGAN and pix2pix in PyTorch (code written by Jun-Yan Zhu and Taesung Park, supported by Tongzhou Wang): LINK to paper on pix2pix. Initial implementation in Houdini was made possible with the help of videos by Chris Kopic and Entagma. Lastly, thanks to the explanatory texts of Daniel Shiffman in The Nature of Code.




