2010-01-24

noisecode: vowels and digraphs

After abandoning the typographic method for assigning 28 noise pairs to the 26 alphabet letters + 2 punctuation signs, I felt lost and rather helpless about moving forward with my noisecode project. I considered phonetics as a key to the problem, but that would require me to develop at least twice as many noises (and likely produce 3-element variables instead of the current 2-element ones), as well as abandon my idea of creating a written alphabetical code.

As the written realm provided too few rules on which to base noisecode and phonetics offered too many of them, I decided to search for a solution somewhere in between. Ultimately, it is vowels and digraphs that should be credited for solving the problem.

Vowels
There are 5 vowels in the English alphabet (A, E, I, O, U), as well as 2 semi-vowels (W, Y), depending on the word they appear in. In total there are 7 letters which can act as vowels, the element that defines a syllable. As noisecode is based on a set of 7 noises composed into unique pairs, I decided to assign one particular noise to be characteristic of vowels only. I chose mains hum, because even when accompanied by another noise it is still well heard and distinguishable. In this way I hope to strengthen the specific rhythm of the language that will be heard in texts translated into noisecode.
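
A quick arithmetic check of why 7 noises suit this scheme (a minimal sketch, not the final assignment: the two unnamed noises are placeholders, and treating a noise paired with itself as a valid sign is my assumption): unordered pairs drawn from 7 noises with repetition allowed give exactly 28 combinations for the 26 letters and 2 punctuation signs, and exactly 7 of those pairs contain mains hum, one for each vowel letter.

```cpp
// Illustration only: the placeholder noise names and the self-pair
// assumption are not part of the actual noisecode assignment.
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Five of the seven noises are named in the notes; the last two are placeholders.
    std::vector<std::string> noises = {
        "mains hum", "grey", "white", "pink", "brown",
        "placeholder 6", "placeholder 7"
    };

    // Unordered pairs, allowing a noise to be paired with itself:
    // C(7,2) + 7 = 28 combinations, one per letter or punctuation sign.
    int total = 0;
    int withMainsHum = 0;
    for (std::size_t i = 0; i < noises.size(); ++i) {
        for (std::size_t j = i; j < noises.size(); ++j) {
            ++total;
            if (i == 0 || j == 0) {
                ++withMainsHum;  // pair contains mains hum -> reserved for a vowel
            }
        }
    }
    std::cout << "pairs in total: " << total                      // 28
              << ", pairs containing mains hum: " << withMainsHum // 7
              << std::endl;
    return 0;
}
```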

Digraphs
"A digraph is a pair of characters used to write one phoneme (distinct sound) or a sequence of phonemes that does not correspond to the normal values of the two characters combined". (source: Wikipedia)

There are 41 English-language digraphs which I used as rules in noisecode:
ae / ai / au / aw / ay / cc / ch / ci / ck / dg / ea / ei / eo / eu / ew / ey / gm / gn /
ie / kn / mb / ng / oa / oe / oi / ou / ow / oy / ph / ps / qu / rh / sc / sh / si / th /
ti / ue / ui / wh / wr.
Apart from that I used 5 English trigraph rules: igh / ous / sch / ssi / tch.
The rule is a formal requirement that both noise pairs coding a digraph share at least one common elemental noise, e.g. the TH digraph consists of T and H, which are coded by two noise pairs sharing one common element (grey noise), with a different noise as the second element of each pair (brown noise in T, white noise in H).
The only exception to this rule is the OUS trigraph, where there is no common noise for all three letters; however, each pair of consecutive letters still follows the rule: mains hum is the common element in OU-, and pink noise provides it for -US.
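
To make the shared-element requirement easy to verify mechanically, here is a small sketch of such a check. Only the T and H assignments are taken from the notes above; the container layout and function name are purely illustrative.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

using NoisePair = std::set<std::string>;

// Digraph rule: the noise pairs of both letters must share
// at least one elemental noise.
bool sharesElementalNoise(const NoisePair& a, const NoisePair& b) {
    for (const auto& noise : a) {
        if (b.count(noise) > 0) return true;
    }
    return false;
}

int main() {
    // Only T and H come from the notes above; every other letter
    // still needs its own assignment.
    std::map<char, NoisePair> code = {
        {'T', {"grey", "brown"}},
        {'H', {"grey", "white"}},
    };

    std::cout << std::boolalpha
              << "TH satisfies the rule: "
              << sharesElementalNoise(code.at('T'), code.at('H'))  // true (grey)
              << std::endl;
    return 0;
}
```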

Whether the vowel and digraph rules prove to be a viable solution to my problem with the method of assigning noises to letters is something the next translations into noisecode will show. I should create some samples shortly.

IFs: exercising creative approaches and media

What if machines could read and react to our emotional states?

What if machines had their own emotions to communicate with humans?

What if machines objectified humans in the same way we personify machines?

What if machines reacted to us personifying them?

What if we could translate non-material and obscure digital content
(e.g. Cellular Automata) using physical and familiar analog objects?

What if CA rules were defined by specific physical world properties?

What if the blogosphere or social networks could be translated into CA?

What if machines were not reliable, could be mistaken or even cheat us?

What if machine and human became integrated on an organic level, in which
the machine's blueprint became part of the human's DNA and the development
of both was interdependent?

arduino: traffic lights



During the Arduino workshop I developed a traffic-light set with 3 different modes, controlled by an ambient light sensor and a potentiometer:
• day-light mode—the green light and red light flashing phases are the same length—active when readings from the light sensor are above a certain threshold;
• night-light mode—the green light phase is twice as long as the red light phase—active when readings from the light sensor are lower than the threshold (the sensor gets covered with a finger in the video);
• alert mode—only the yellow light flashes—active when the reading from the potentiometer is below an assigned value.

The whole piece is quite simple, but it proved very useful for understanding how to actually write code myself. The code can be viewed >>here<<
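
Since the linked code is not reproduced here, below is a minimal sketch of the three-mode logic described above. The pin numbers, thresholds and phase lengths are assumptions for illustration, not the values from the actual workshop code, and each phase is simplified to holding a light on for a fixed time.

```cpp
// Sketch of the traffic-light logic: pins, thresholds and timings are assumed.
const int RED_PIN    = 11;
const int YELLOW_PIN = 12;
const int GREEN_PIN  = 13;
const int LDR_PIN    = A0;   // ambient light sensor (voltage divider)
const int POT_PIN    = A1;   // potentiometer

const int LIGHT_THRESHOLD = 500;  // day/night boundary (0-1023)
const int ALERT_THRESHOLD = 100;  // potentiometer value that triggers alert mode
const int PHASE_MS        = 1000; // base phase length in milliseconds

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(YELLOW_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
}

// Turn one light on for the given time, then switch it off.
void phase(int pin, int ms) {
  digitalWrite(pin, HIGH);
  delay(ms);
  digitalWrite(pin, LOW);
}

void loop() {
  int light = analogRead(LDR_PIN);
  int pot   = analogRead(POT_PIN);

  if (pot < ALERT_THRESHOLD) {
    // alert mode: only the yellow light flashes
    phase(YELLOW_PIN, PHASE_MS);
    delay(PHASE_MS);
  } else if (light > LIGHT_THRESHOLD) {
    // day-light mode: green and red phases of equal length
    phase(GREEN_PIN, PHASE_MS);
    phase(RED_PIN, PHASE_MS);
  } else {
    // night-light mode: green phase twice as long as the red phase
    phase(GREEN_PIN, 2 * PHASE_MS);
    phase(RED_PIN, PHASE_MS);
  }
}
```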

2010-01-20

first things first

Due to some transcendental reasons the opening of this blog was chaotic and paid little regard to the multiple interests to which I dedicate this space.
So first things first.

My work is currently based around two major areas of interest:
• coding language;
• artificial life.

Coding language—this is a continuation of my 1st term answer to the 'tone and noise' brief. The purpose of this brief was: 'to explore how we can push and experience one quality of language through another and thereby extend what communications can be and what it can do'. Since November, when I produced my answer and crits took place, some radical improvements have been made which affected the very core of the project.
I will document its development by tagging it with: noisecode.

Artificial life—a broad discipline where computer science, design and the natural sciences overlap. This seems to be my favourite cocktail, one I have been searching for quite a while and have just discovered on the MACD course at CSM. This blog is mainly devoted to documenting my activity in the field of A-life, which is also going to define the context of my final project at CSM.
All posts relevant to this area will be tagged with: A-life.

I believe these two areas of my interest share at least two common features.
One is strongly rooted in my very personal approach to design:
developing a working system;
the second one has always attracted me to design:
design as a process of translation.
With the help of this blog I want to trace the development of my two initial ideas and dream of a happy ending when both will ultimately fuse at one point.
If this happens, I will apply for a PhD at MIT.

2010-01-19

anticipating the future



These are my predictions of what may happen in the future in terms of robotics, wireless communication and virtual societies. The predictions are made using four different methods of anticipating the future:

• self-fulfilling prophecy—what we believe will happen is exactly what will happen (according to Robert K. Merton);

• historiosophy (philosophy of history)—thesis + antithesis = synthesis (according to Hegel);

• finding dominant progress trends—a method mastered by K. Marx and used today by A. Negri and M. Hardt;

• extrapolation—projecting current development patterns into the future and defining what will need to change on that basis.

2010-01-18

creative approach—————medium

This is a list of creative approaches and related media that define my area
of interest in digitality (under constant reformulation):

A-life—————generative graphics
A-life—————game/process design
A-life—————spatial design
linguistics—————sound design
communication design————graphic design

2010-01-12

3 maps


A map of fields of my interests.



A map of creators relevant to my interests.



Anticipating the future of the fields of my interest.