Monday, 14 March 2016

(Aldous EMS-30-02) Your First Choice Humanoid Synthetic Replicant. CS-INDUSTRIES 'Making Superior People'



Year: 2030
Bilineal Division: D902FDF90
Log Number: 453532778
Prototype Name: ALDOUS
Model Number: CS-INDUSTRIES (EMS-30-02)
Serial Number: 984205483

The future is now, and we are the future: CS-Industries has invested billions of pounds and countless man-hours in meticulous development and construction by some of the top minds in the field of advanced robotics. The result is a breakthrough in humanistic synthetic intelligence and construction. Aldous is more reliable than your friends, your colleagues and any electrical or mechanical device you have at home. We do not consider Aldous a robot; in fact, here at CS-Industries we are currently in the process of securing legal status for our latest model (EMS-30-02) under the Synthetic Human Rights Act of 2028. This would mean that Aldous integrates with our society as a human and not as a robot.

It is important that we recognise the improvements Aldous can make to our world and its varied societies.

Security: Aldous is bulletproof and has been tested against close-range projectiles from MG42s, PSG1s and handguns with spore mag rounds. He has also passed all close-proximity grenade and mine tests. This makes Aldous the perfect ally on the battlefield.

Civilian: Aldous is equipped for civilian purposes. He has an adaptive memory function that allows him to learn at a geometric rate, making him perfect for everything from complicated tasks like fixing your car to simple everyday labours like doing the dishes, taking the garbage out and making dinner.

Safety: Aldous abides by Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

This framework has become integral to creating synthetic replicants that understand and appreciate the delicacy of life and respect within today's contemporary futurist society. Here at CS-Industries we have learnt from our past mistakes, and the tragedies that followed the release of EGOR V.2 are now a thing of the past. That is why every Aldous model comes with its own lifetime protection warranty: in the unlikely event of malfunction or harm, our tactical revo team will be dispatched, and full compensation of 1,000,000,000,000 Galactic Dollars will be presented to the owner if such malfunction is proven by our team of experts.

Aldous has servos that produce 600 times the torque of the average human being, permitting him to perform tasks that would be impossible for the average person. This means your workforce can be reduced without affecting your company's productivity. We do not see this as taking work away from the average Joe but, on the contrary, as making the workplace a safer environment. CS-Industries is currently in the process of hiring 500,000 new staff members, from maintenance to administration. The world needs Aldous, but here at CS-Industries we need real people like you. We Are The Future.

.............................................................................................................................................................

Abstract of Theoretical Work


Animatronic characters are the physical electro-mechanical emulation of natural or conceptual living beings. The predominant differentiation between robots and animatronics resides in the simulation of organic aesthetics: a robot, for example, can be a simplistic metal cube displaying no symbolic reference to living or organic design, whereas animatronics by definition present a simulation of animalistic kinaesthetic operations and aesthetic materiality. In this context, androids and mechanical humanoid replications can be considered a derivative of animatronics, as they exhibit external organic simulation correlating to the natural humanistic paradigm. However, there is a tendency to perceive animatronics as somehow detached from the fundamental concept of robotics: a delineation that deals only with surface mimicry or puppetry, thereby neglecting internal substance. This study argues that the elementary proposition of animatronics is not only at the core of contemporary robotics but, furthermore, the very foundation on which the modern robotics movement was built. The logic for this contrast has potential grounding in the terminology ‘animatronic’, a nomenclature formulated from the Walt Disney Corporation appellation (Audio-Animatronic) of the 1960s. However, the concept of simulating organic function via mechanical device has roots in 3rd-century inventions under the classification automata. There is little division in definition between the two terminologies, suggesting that the primary meanings remain singular. Therefore, it is possible to posit that ‘animatronics’ and ‘automatons’ are one and the same device, objectified and classified at different time periods, leading to diffusion. This positioning presents possible advancement of animatronics in film via advancements in the scientific field of robotics. Thus, is the future of filmic effects a purely spatial virtual dimension, or will we see advancements in robotics breathe life back into the animatronics industry?

Beginning with Tom Gunning’s theoretical model of cinema as a machine for creating optical visceral experiences, the core proposition of this study stipulates that pure CGI characters no longer have the ability to accurately simulate consciousness and materiality, or to meet the expectations of the modern cinematic audience. It has been claimed that the movement towards hybrid systems (motion capture / live-action integration) provides a form of mediation between actuality and virtuality, adding depth ('soul / consciousness') and a kinaesthetic grounding of external operations in an attempt to solidify and reify the virtual image into something organic. However, it is suggested here that hybrid systems have problematic issues concerning the inaccurate approximation of surface reflection, the portrayal of additional appendages, incomplete character formation (interaction / performance), and encapsulating and staying true to an actor’s performance during editing. The imprecision of these elements becomes increasingly apparent over time, especially at close proximity, where it becomes discernible to the evolving critical eye of the average modern cinematic observer.


This projection positions hybrid systems not merely as mediation between physical reality and holographic dimensions but as a means of returning to the more substantial and grounded animatronic character systems. Modern animatronic characters and puppets exhibit greater aesthetic verisimilitude and more detailed organic simulation of external and internal operations at close proximity than the most advanced CGI and hybrid systems, as they are augmented via the parameters of the physical world. This research explores a possible return to animatronic special effects in the future of film as the primary medium for character creation, overtaking CGI and other virtual hybrid systems, which lack the ability to propagate visceral optical experiences, fine detail and nuance, and genuine, responsive characters that meet the evolving critical expectations of the cinematic observer. Technological advancements in animatronic interactive control systems allow accurate tracking of movement, autonomous extemporaneous expressions (at a programmable level), voice recognition with recording and response technology, exact precision of kinetic functions with meticulous coordination, and the ability to continually repeat sequences of action. In addition to these properties there is potential post-production value via adaptation (interactive Rocket Raccoon, Guardians of the Galaxy 2014 promo: Tetsushi Okuyama). Further reinforcing this theoretical position, the film Harbinger Down (2015) became the first successful publicly backed Kickstarter campaign for a cinematic feature to exhibit animatronic characters as the primary special-effects medium (3,066 backers pledging $384,181). The major Hollywood production Star Wars: The Force Awakens (2015) has demonstrated a return to practical and animatronic special effects over the predominant modern orthodox virtual progression, grounding the possibility of an animatronic renaissance.

Sunday, 13 March 2016

Walkthrough of Aldous the robot: animatronic systems, controls and functions.


The video below demonstrates how Aldous the animatronic character functions and operates via scripts and levels of control and command. I really wanted to make this video accessible to all, so I have made sure to use non-technical terminology and expand on the more complex elements of the project. Every part of the video can also be viewed in my reflective blog entries, so if there is anything that you feel needs to be expanded on or explained, it is just a case of locating the blog entry for that element and it should answer your query. I am going to include the source code for the robot in this entry to visually demonstrate the different levels of control that are explained in the video.
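To give a flavour of those levels of control before the full scripts, the Arduino script below decodes single serial bytes into three bands: values 0-7 select a talk / idle mode, 9-69 set the neck pan servo angle directly, and 71-200 set the neck tilt (the value minus 70). A minimal Processing sketch along these lines should be enough to drive the head over serial; note that the serial port index will vary from machine to machine.

import processing.serial.*;

Serial port;

void setup() {
// the index into Serial.list() is machine specific; print the list to find your Arduino
port = new Serial(this, Serial.list()[5], 9600);
delay(2000); // give the board a moment to reset after the port opens
port.write(1); // 0-7: talk mode (1 starts lip syncing to the audio input)
port.write(40); // 9-69: neck pan servo angle in degrees
port.write(160); // 71-200: neck tilt; the Arduino writes 160 - 70 = 90 degrees
}

void draw() { }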

The objective of this video is to give the viewer an idea of the scale, purpose and functionality of the project and its relation to the theoretical work. It is really difficult to get a true idea of the size and presence of the robot without actually standing in front of it and interacting with the model. However, I have tried my best to encapsulate this perspective in the videos.


In the spirit of open-source sharing, and to get as many people involved in animatronic character systems as possible, I have decided to include the scripts for the project. Please credit me if you decide to use them in your own project.

..............................................................................................................................................................

Arduino Script
#include <Servo.h>
int c=0;
int pos = 0;
int talkmode=0;
int oldtalkmode=0;
long rfac;
long mpos;
int eyedel=0;
int pose =140;
int poslip =90;
int eyeposh=57;
int eyeposhinc=1;
int posbot=90;
//int stopy=90;
Servo myservo; // create servo object to control a servo
// a maximum of eight servo objects can be created
Servo myservo2;
Servo myservo3;
Servo myservo4;
Servo myservo5;
Servo myservo6;
Servo myservo7;
Servo myservo8;
int talkcount=255;
//eventually use audio stop trigger
int doclosemouth=0;
int turnmode=0;
int turnmode2=0;
int globmode=1; //1 is move about 2 is eyetwitch
int wcount;
int pcount;
int mystart=1;
int notalkcount=0;
void setup(){
Serial.begin(9600);
wcount=0;
pcount=0;
pinMode(1,OUTPUT);
pinMode(8,OUTPUT);
pinMode(5,OUTPUT);
pinMode(4,OUTPUT);
pinMode(13,OUTPUT);
pinMode(11,OUTPUT);
pinMode(10,OUTPUT);
pinMode(12,OUTPUT);
pinMode(3,OUTPUT);
// pinMode(A3,OUTPUT);
// pinMode(A4,OUTPUT);
myservo.attach(1); // attaches the servo
myservo2.attach(8); //left right
myservo3.attach(5); //up down
myservo4.attach(3); //eyes left and right
myservo5.attach(4);
myservo6.attach(12);
myservo7.attach(11);
myservo8.attach(10);
int oldtalkmode=0;
// myservo3.attach(A3);
// myservo4.attach(A4);
}

void loop(){
// if(talkmode==1){
// pose=140;
// poslip=90;
// posbot=100; // }
// if(mpos>131){
// notalkcount++;
// }else{
// notalkcount==0;
//}
// Serial.print(notalkcount);
// if(notalkcount>2000){
// talkmode=0;
// oldtalkmode=0;
// notalkcount=0; // } // }
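//random eye saccades: every so often dart the eyes to a new position
//(myservo4 = eyes left/right; myservo5 appears to be the vertical eye axis)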
int t=random(2000);
int pos=random(400);
if(t>1998){
if(pos>195){
int v=25+random(60);
int pos2=140+random(60);
myservo4.write(v);
myservo5.write(pos2);
}
}
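//decode single-byte commands from the Processing sketch:
//0-7 = talk mode, 9-69 = neck pan angle, 71-200 = neck tilt (value minus 70)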
while(Serial.available()>0){
int din=Serial.read();
if(talkmode<9) oldtalkmode=talkmode;
if(din<8) talkmode=din;
if(din>8 && din<70) turnmode=din;
if(din>70 && din<201) turnmode2=din;
// if(din==201 && talkmode==0) {
// globmode=2;
// mpos=134;
// }
// if (globmode=1);
// Serial.print("TM="+talkmode);
// if(globmode==1){
// eyeposh=57;
// myservo4.write(eyeposh);
// }
}
globmode=1; //force it into this mode
if(globmode==1){
//movement
if(talkmode==1){
//wait for start of talking
if(mystart==1){
int dropout=0;
while(analogRead(3)==0){
updatestuff();
}
mystart=0;
// Serial.println("hello");
}
//count pauses
if(mystart==0){
int v=analogRead(3);
// Serial.print("v:");
// Serial.print(v);
// Serial.print(" ");
if(v==0){
pcount++;
if(pcount>10){ mystart=1; }
}else{
doclosemouth=0;
pose=140;
poslip=90;
posbot=100;
if(pcount>5){
pcount=0;
wcount++;
doclosemouth=1;
// Serial.println(wcount);
pcount=0;
// pose=140;
// poslip=90;
// posbot=100;
// mystart=1;
}
} //?
} //?
//talking
// delay(10+random(2));
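//while talking, randomise the three mouth servo targets (myservo6/7/8)
//each pass to approximate natural speech movement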
pose=140+random(60);
poslip=2+random(32);
posbot=50+random(30);
//delay (100);
myservo6.write(pose);
myservo7.write(poslip);
myservo8.write(posbot);
rfac=random(100);
if(rfac<45){
// mpos=random(130);
mpos=99+random(50);
delay(60+random(40));
// delay(random(11));
}
}else{
//core bit
if(doclosemouth==1){
mpos=134;
pose=140;
poslip=90;
posbot=100;
// myservo8.write(100);
//myservo6.write(140);
// myservo7.write(90);
// myservo8.write(90);
}
}
int r=analogRead(5);
if(r<1000){
mpos=133;
pose=140;
poslip=90;
posbot=90;
// myservo8.write(100);
talkmode=0;
}
if(talkmode==0){
// myservo6.write(140);
// myservo7.write(90);
// myservo8.write(100);
pose=140;
poslip=90;
posbot=90;
mpos=132; // close mouth
}
if(turnmode>9 && turnmode<70){ //left/ right
myservo2.write(turnmode);
// Serial.print("TM="+turnmode);
// talkmode=oldtalkmode;
}
if(turnmode2>70){ //left/ right
int sv=turnmode2-70;
myservo3.write(sv);
// Serial.print("TM="+turnmode);
// talkmode=oldtalkmode;
}
if(mpos>130 && talkmode>0) myservo4.write(57);
//up/down here
myservo.write(mpos);
}//end of globmode 1;
if(globmode==10){ //never = 10 so disables
// int v=analogRead(3);
/// if(v>20){ // globmode=1;
// talkmode=1;
// }
updatestuff();
//start of eye loop
eyedel++;
if(eyedel==1000){
eyedel=0;
myservo4.write(eyeposh);
eyeposh=eyeposh+eyeposhinc;
if(eyeposh==90 || eyeposh==25) {
eyeposhinc=eyeposhinc*-1;
int d=250;
d=d+random(1750);
delay(d);
}
}
}
}

void updatestuff(){
int t=random(2000);
if(t>1998){
int v=25+random(60);
myservo4.write(v);
int pos=random(400);
if(pos>195){
int pos2=140+random(60);
myservo5.write(pos2);
}
}
// if(mpos>131){
// notalkcount++;
// }else{
// notalkcount==0;
// }
// if(notalkcount>2000){
// talkmode=0;
// oldtalkmode=0;
// notalkcount=0;
// }
while(Serial.available()>0){
int din=Serial.read();
// if(talkmode<9) oldtalkmode=talkmode;
// if(din<8) talkmode=din;
// if(din==1){ // globmode=1;
// talkmode=1; // eyeposh=57;
// myservo4.write(eyeposh);
// }
if(din>8 && din<70) turnmode=din;
if(din>70 && din<201) turnmode2=din;
// Serial.print("TM="+turnmode);
// if(din==201 && talkmode==0) globmode=2;
// if(din==202) globmode=1;
// if(globmode==1){ // eyeposh=57;
// myservo4.write(eyeposh);
// }
}
if(turnmode>9 && globmode==1){ //left/ right
myservo.write(135);
// myservo8.write(stopy);
myservo2.write(turnmode);
// Serial.print("TM="+turnmode);
// talkmode=oldtalkmode;
}
if(turnmode2>70 && globmode==1){ //left/ right
int sv=turnmode2-70;
myservo3.write(sv);
myservo6.write(140);
myservo7.write(90);
myservo8.write(90);
// Serial.print("TM="+turnmode);
// talkmode=oldtalkmode;
}
}
..............................................................................................................................................................
Processing Script
import processing.serial.*;
/* A little example using the classic "Eliza" program.
Eliza was compiled as a Processing library, based on the Java source code by Charles Hayden: http://www.chayden.net/eliza/Eliza.html
The default script that determines Eliza's behaviour can be changed with the readScript() function. Instructions to modify the script file are available here: http://www.chayden.net/eliza/instructions.txt */
// max is 67 on sweep
import codeanticode.eliza.*;
Serial myport;
int dummy=8;
int sendx=0;
Serial myport2; // neck motor
int drawskeleton=0; //1 / 0
int lastsentx=-1;
int lastsenty=-1;
int archsenty=-1;
int archsentx=-1;
int eyecount=0; //used for sampling movement
Eliza eliza;
PFont font;
String elizaResponse, humanResponse;
boolean showCursor;
int lastTime;
PImage bg1a;
int closestValue;
int closestX;
int closestY;
int lastcx;
int lastcy;
float targx; float targy;
//simple openni
import SimpleOpenNI.*;
float globx, globy;
float oldglobx, oldgloby;
SimpleOpenNI context;
color[] userClr = new color[]{
color(255,0,0),
color(0,255,0),
color(0,0,255),
color(255,255,0),
color(255,0,255),
color(0,255,255)
};
PVector com = new PVector();
PVector com2d = new PVector();
//end simpleopenni
void setup() {
size(1200, 786);
println(sketchPath);
//si
context = new SimpleOpenNI(this);
if(context.isInit() == false) {
//println("Can't init SimpleOpenNI, maybe the camera is not connected!");
exit();
return;
}
// enable depthMap generation
context.enableDepth();
// enable skeleton generation for all joints
// context.enableUser();
background(200,0,0);
//end si
bg1a=loadImage("bg1.jpg");
//println(Serial.list());
myport=new Serial(this, Serial.list()[5],9600);
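// NOTE: Serial.list()[5] is specific to my machine; print Serial.list()
// and pick whichever entry corresponds to your Arduino's port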
//myport2=new Serial(this, Serial.list()[??????],9600);
// When Eliza is initialized, a default script built into the
// library is loaded.
eliza = new Eliza(this);
// A new script can be loaded through the readScript function.
// It can take local as well as remote files.
eliza.readScript("scriptnew.txt");
//eliza.readScript("http://chayden.net/eliza/script");
// To go back to the default script, use this: //eliza.readDefaultScript();
font = loadFont("Rockwell-24.vlw");
textFont(font);
printElizaIntro();
humanResponse = "";
showCursor = true;
lastTime = 0;
}
void draw() {
while(myport.available()>0){
int dat=myport.read();
/// println(""+dat);
}
eyecount++;
//println("EYECOUNT:"+eyecount);
if(eyecount>=30){
println("diffx="+abs(closestX-lastcx)+" diffy="+abs(closestY-lastcy));
// println(archsenty+" "+closestY+" "+archsentx+" "+lastsentx);
//if(archsenty==-1) archsenty=lastsenty;
//if(archsentx==-1) archsentx=lastsentx;
if(abs(closestY-lastcy)<30 && abs(closestX-lastcx)<30){
// archsenty=lastsenty;
// archsentx=lastsentx;
// for(int lop=0;lop<100;lop++){
println("WOULD GO INTO EYE TWITCHING");
// myport.write(201);
lastcx=closestX;
lastcy=closestY;
}else{
//if(abs(lastsenty-archsenty)>45 && abs(lastsentx-archsentx)<45){
println("WOULD GO BACK TO MOVEMENT");
lastcx=closestX;
lastcy=closestY;
// myport.write(202);
// }
}
eyecount=0;
}
image(bg1a,0,0,width,height);
//background(102);
if(globx!=oldglobx){
sendx=int(abs(globx));
// sendx=8+(sendx/8);
oldglobx=globx;
// myport.write(sendx);
}
if( sendx>9 && lastsentx!=sendx){
//println("sending neck positions"+sendx);
if(abs(lastsentx-sendx)>35) eyecount=145;
myport.write(sendx);
// UNCOMMENT FOR PEOPLE TRACKING
lastsentx=sendx;
} //println("neck y:"+int(globy));
if(random(10)>4){
int outy=70+int(globy);
if(outy>200) outy=200;
//println("outy="+outy);
//HERE IS THE LINE SENDING THE NECK Y COORDINATES
if(lastsenty!=outy){
if(abs(lastsenty-outy)>35) eyecount=145;
myport.write(outy);
//println("OUTY:"+outy);
lastsenty=outy;
} }
//DUMMY SWEEP STARTS HERE
if(random(10)>2){
// myport.write(dummy);
////println("DUMMY:"+dummy);
//dummy++;
//if(dummy>170) dummy=9;
//myport.write((70+dummy));
////println("neckyyyyyyyy"+(70+dummy));
} //DUMMY SWEEP ENDS HERE
fill(255);
stroke (111);
text(elizaResponse, 30, 450, width - 40, height);
fill(0);
int t = millis();
if (t - lastTime > 500) {
showCursor = !showCursor;
lastTime = t;
}
if (showCursor) text(humanResponse + "_", 30, 600, width - 40, height);
else text(humanResponse, 30, 600, width - 40, height);
// simpleopennidrawmethod();
closestpixdrawmethod();
}
void closestpixdrawmethod(){
closestValue = 8000;
context.update();
// get the depth array from the kinect
int[] depthValues = context.depthMap();
// for each row in the depth image
for(int y = 0; y < 480; y++){
// look at each pixel in the row
for(int x = 0; x < 640; x++){
// pull out the corresponding value from the depth array
int i = x + y * 640;
int currentDepthValue = depthValues[i];
// if that pixel is the closest one we've seen so far
if(currentDepthValue > 0 && currentDepthValue < closestValue){
// save its value
closestValue = currentDepthValue;
// and save its position (both X and Y coordinates)
closestX = x; closestY = y;
}
}
}
// map the 640 pixel image width onto the 67 degree neck sweep ("max is 67 on sweep")
float scfac=67.0/640;
globx=(closestX*scfac)*.7;
targy=(closestY*scfac)*3.2;
globy=globy+((targy-globy)/8);
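// ease globy an eighth of the way toward the target each frame to smooth servo jitter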
// globy=targy;
// //println(globx);
//draw the depth image on the screen
// image(kinect.depthImage(),0,0);
// draw a red circle over it,
// positioned at the X and Y coordinates
// we saved of the closest pixel.
// fill(255,0,0);
// ellipse(closestX, closestY, 25, 25);
}
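// On ENTER the chain is: Eliza generates a reply, the reply is saved to
// test.txt on the desktop, the AppleScript bridge app is launched to speak
// the file aloud, and byte 1 is written to the Arduino to start talk mode.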
void keyPressed() {
if ((key == ENTER) || (key == RETURN)) {
//println(humanResponse);
//first scan for keywords
elizaResponse = eliza.processInput(humanResponse);
//println(">> " + elizaResponse);
String[] out={elizaResponse};
saveStrings("/Users/carlstrathearn/Desktop/test.txt",out);
delay(10);
//println(sketchPath+"/data/applescriptbridge.app");
open(sketchPath+"/data/applescriptbridge.app");
myport.write(1);
humanResponse = "";
} else if ((key > 31) && (key != CODED)) {
// If the key is alphanumeric, add it to the String
humanResponse = humanResponse + key;
} else if ((key == BACKSPACE) && (0 < humanResponse.length())) {
char c = humanResponse.charAt(humanResponse.length() - 1);
humanResponse = humanResponse.substring(0, humanResponse.length() - 1);
}
}
void printElizaIntro() {
String hello = "Hello.";
elizaResponse = hello + " " + eliza.processInput(hello);
//println(">> " + elizaResponse);
}
void simpleopennidrawmethod(){
context.update();
// //println("gx="+globx+" GY="+globy);
// draw depthImageMap
//image(context.depthImage(),0,0);
if(drawskeleton==1) image(context.userImage(),0,0);
// draw the skeleton if it's available
int[] userList = context.getUsers();
for(int i=0;i<userList.length;i++){
if(context.isTrackingSkeleton(userList[i])){
stroke(userClr[(userList[i]-1) % userClr.length]);
drawSkeleton(userList[i]);
// draw the center of mass
if(context.getCoM(userList[i],com)){
context.convertRealWorldToProjective(com,com2d);
stroke(100,255,0);
strokeWeight(1);
beginShape(LINES);
vertex(com2d.x,com2d.y - 5);
vertex(com2d.x,com2d.y + 5);
vertex(com2d.x - 5,com2d.y);
vertex(com2d.x + 5,com2d.y);
endShape();
fill(0,255,100);
text(Integer.toString(userList[i]),com2d.x,com2d.y);
}
}
}
}
void drawSkeleton(int userId) {
// to get the 3d joint data
/*
PVector jointPos = new PVector();
context.getJointPositionSkeleton(userId,SimpleOpenNI.SKEL_NECK,jointPos);
//println(jointPos);
*/
//println(SimpleOpenNI.SKEL_HEAD);
if(random(100)>97){
PVector jointPos = new PVector();
context.getJointPositionSkeleton(userId,SimpleOpenNI.SKEL_HEAD,jointPos);
//println(jointPos.x);
//println(jointPos.y);
//println(jointPos.z);
globx=jointPos.x;
globy=jointPos.y;
}
if(drawskeleton==1){
context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}
}
// -----------------------------------------------------------------
// SimpleOpenNI events
void onNewUser(SimpleOpenNI curContext, int userId) {
//println("onNewUser - userId: " + userId);
//println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId) {
//println("onLostUser - userId: " + userId);
}
void onVisibleUser(SimpleOpenNI curContext, int userId) {
////println("onVisibleUser - userId: " + userId);
}
..............................................................................................................................................................
Apple Script
set theVoices to {"Alex", "Bruce", "Fred", "Kathy", "Vicki", "Victoria"}
set thePath to (path to desktop as Unicode text) & "test.txt"
set the_file to thePath
set the_text to (do shell script "cat " & quoted form of (POSIX path of the_file))
set the clipboard to the_text
set theSentence to the clipboard
log (theSentence)
say theSentence using ("Bruce") speaking rate 140 modulation 5 pitch 15
on readFile(unixPath)
return (do shell script "cat /" & unixPath)
end readFile
To use voice recognition, simply activate the function on your Apple Mac from the system menu and output it to the Processing / Eliza chat interface instead of using a keyboard. (You will need to set up the microphones in the Kinect sensors for this to work.)
..............................................................................................................................................................

Feelings: I feel pretty happy with the outcome of the videos. I had issues with compression, and the first run produced file sizes in excess of 50 GB, but I have now sorted this out and I have files of a workable size that are currently uploading for submission.

Evaluation: It has been quite fun doing the filming elements of this section of the project; it was nice to be able to involve my friends, and we had a good laugh at the same time. The editing was a bit of a long process, as it involved lots of cut scenes and blending. For the promo video for the robot, it did cross my mind to use green-screen / suit techniques for the set; however, my timescale did not allow for this. I did enquire about getting the equipment, but I needed to work with my mini film crew and within their schedules too. I think if I had done this I would have been able to edit out the manual puppetry scenes from the footage using keying techniques. Because I did not have time to organise this, you can clearly see my friend's arm and hand in the promo video as he operated the robot's arm and hand via the rear touch-sensitive control panel. This really does not bother me too much, as it demonstrates the practical puppetry elements of the project quite clearly.

Another issue I had when filming was noise contamination. I was filming in between lessons and team project sessions, so it was very busy and very noisy at times. It was not really fair to ask other people to be quiet whilst I did my filming, as it was the middle of the day and I would not expect them to have to leave or stay silent for the duration of the session. So, I decided to record the audio separately during the editing stages. This was not too hard to sync up with the original footage, as it came from the same source (the Eliza chat). To do this I just ran the program on my MacBook and recorded the audio line-in using the QuickTime audio recording software on my iMac. This was a really good way of getting clean audio for my videos. The general noise contamination was the main reason why I did not use voice recognition in the video, as the mic struggled to define words. This is not such a big issue, as the voice recognition was originally set up in case of a viva examination, where I wanted to include the robot in the discussion.

One of the other more interesting elements of my video was the Scottish robotic voice modulator. This was a service I purchased online, and I think it works really well in the context of the subject matter and feel of the short. I had to flick through various voices and accents to get the one I felt suited the video. I did consider using a child's voice, sort of like the evil computer system the Umbrella Corporation created in the Resident Evil films, but after consideration I did not feel that voice had the presence and authority I wanted for this video.

Analysis: The current situation of the project is still not exactly where I want it to be; the events of the past couple of months have had a major impact on the progress and direction I wanted my work to take. This, however, is totally out of my hands, and I have had to adapt to the new situation in the most professional way possible. As a bit of a perfectionist, I really do not like working to such short deadlines. I have no problem working to dates provided I am given a bit of time to prepare, rather than jumping straight into something and having to make do and mend along the way (but I guess that's the nature of the beast). It has been a long while since I have had to work in this way; it is very stressful and has the potential to produce unwanted results. Luckily, in this case more positive outcomes were achieved than negative. If I were to do this element of the project over again, I would most certainly ask for additional time to prep some of the set work, and maybe even book my own space to avoid noise contamination.

Conclusion: In conclusion, this session was fun; it had lots of ups and downs, but I can't say I did not enjoy the challenge. I keep asking myself if the end footage is what I imagined it would be, and the answer is no. I think the video could have been improved tenfold with some extra effects and filming techniques that would have really brought the character and environment to life. This is not to say that I am not pleased with the outcome of the video; it is just not the quality I would expect under normal circumstances. I always aim to get the most out of everything I do and strive to achieve the best results possible. In this case, the high expectation was always going to be an almost impossible task to achieve on my own in the three-week timescale set by my tutor.

Action Plan: The action plan now is to wait until I get some feedback on the footage; then it will be in the hands of the examination team.