use the loadImage function to load an image from my data folder into my Sketch
declare a boolean (pixelMode) for the if test
use the image function to display the new images (dimensions specified in copyWidth and copyHeight and areas specified by the pixel-swapping if test) in the window.
I also learned about the get and set methods and how they can be used to define a region and change the colour of the loaded image. Rad!
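Putting those pieces together, a minimal sketch along these lines might look like the following (the image filename, window size and region coordinates are my assumptions, not the recipe's exact values):

```processing
PImage img;
boolean pixelMode = true;  // the if test toggles the pixel-swapping
int copyWidth = 100;
int copyHeight = 100;

void setup() {
  size(640, 480);
  img = loadImage("landscape.jpg");  // loaded from the sketch's data folder
}

void draw() {
  image(img, 0, 0);
  if (pixelMode) {
    // get() grabs a region of the loaded image; set() stamps it elsewhere
    PImage region = img.get(0, 0, copyWidth, copyHeight);
    set(width - copyWidth, height - copyHeight, region);
  }
}
```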
Making my way through the Working with colours recipe from the Processing 2 Creative programming cookbook source code on GitHub. In this recipe I learned how to use the color function to create a variable of the type color (c), the stroke function to set the colour of the stroke, and the fill function to set the fill colour of a shape. I also added an extra 130 pixels to displayHeight. I wanted the size and position of all the shapes to be relative to the window size (displayWidth, displayHeight), but I haven’t figured out how to do that yet.
I understand this recipe is about drawing basic shapes and not about size and positioning, but there’s something rather inflexible about the way each shape is drawn. I’d prefer to scale and position each shape relative to the size of the window.
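Roughly what I mean by the color, stroke and fill functions, in one small sketch (the colours and rectangle coordinates here are my own, not the recipe's):

```processing
void setup() {
  size(400, 300);
  color c = color(255, 0, 0);  // a red stored in a variable of type color
  stroke(c);                   // outline colour comes from the variable
  fill(0, 0, 255);             // fill colour set directly
  rect(50, 50, 100, 80);       // a fixed-position, fixed-size shape
}
```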
Extension task 1: As an extension to this recipe, I think I’ll explore relative scale and positioning in an attempt to make my Sketch that little bit more flexible.
Extension task 2: I wonder if it’s possible to detect or determine the display size of the environment/user agent. If so, objects may be able to be drawn and positioned relative to the display size. My task is to find out and then create a few example sketches.
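One possible first stab at both extension tasks, assuming Processing's built-in displayWidth and displayHeight variables report the display size and width and height report the current window: express every shape as a fraction of the window rather than in fixed pixels.

```processing
void setup() {
  // displayWidth/displayHeight report the monitor size, so the window
  // itself can be sized relative to the display
  size(displayWidth / 2, displayHeight / 2);
}

void draw() {
  background(255);
  // an ellipse centred in the window, sized as fractions of the window,
  // so it scales if the window dimensions change
  ellipse(width / 2, height / 2, width * 0.5, height * 0.25);
}
```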
Making my way through the Keyboard interaction recipe from the Processing 2 Creative programming cookbook source code on GitHub. In this recipe I learned about using keyPressed, keyReleased and keyTyped functions to assign keys on the keyboard to execute code and also a bit more about if tests and declaring variables, particularly changing their values when specified keys are pressed.
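The kind of thing I mean: keyPressed changes a variable, and draw uses the new value on the next frame. The keys and the diameter variable are my own choices for illustration.

```processing
int diameter = 50;

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  ellipse(width / 2, height / 2, diameter, diameter);
}

void keyPressed() {
  // an if test decides which key changes the variable's value
  if (key == '+') {
    diameter += 10;
  } else if (key == '-') {
    diameter -= 10;
  }
}
```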
Making my way through the Mouse interaction recipe from the Processing 2 Creative programming cookbook source code on GitHub. In this recipe I learned about the mouseClicked, mouseDragged, mouseMoved, mousePressed and mouseReleased functions. I also learned about the mouseX, mouseY, pmouseX, pmouseY, mousePressed and mouseButton variables. The mouseButton variable allows you to determine whether the left, right or middle mouse button has been clicked. This recipe is also the first time an if test has been used. They’re cool. I know I’ll be using them heavily to control what’s displayed and what happens in future sketches.
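A small sketch along those lines, with the if test on mouseButton deciding the fill colour (the colours and ellipse size are my assumptions):

```processing
void setup() {
  size(400, 400);
  background(255);
}

void draw() {
  // nothing per-frame; drawing happens in the event function below
}

void mousePressed() {
  // mouseButton tells us which button triggered the event
  if (mouseButton == LEFT) {
    fill(255, 0, 0);
  } else if (mouseButton == RIGHT) {
    fill(0, 0, 255);
  } else {
    fill(0, 255, 0);  // middle button
  }
  ellipse(mouseX, mouseY, 30, 30);  // drawn at the click position
}
```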
Making my way through the Maths functions recipe from the Processing 2 Creative programming cookbook source code on GitHub. In this recipe I learned about declaring variables and the abs, ceil, floor, round, sq, sqrt, min, max, and dist functions. I also learned how to use the println function to display output of functions to the Processing console window. Good for debugging a Sketch!
For this recipe the Processing window displayed at runtime isn’t used, but I still wanted to use the size() function.
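The console output I'm describing looks roughly like this, with println sending each result to the Processing console:

```processing
void setup() {
  size(200, 200);             // the window itself isn't used in this recipe
  println(abs(-5));           // 5
  println(ceil(4.2));         // 5
  println(floor(4.8));        // 4
  println(round(4.5));        // 5
  println(sq(3));             // 9.0
  println(sqrt(16));          // 4.0
  println(min(3, 7));         // 3
  println(max(3, 7));         // 7
  println(dist(0, 0, 3, 4));  // 5.0 — distance between (0,0) and (3,4)
}
```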
This probably isn’t the most efficient way to draw lines to form a grid pattern, but this method is helping me to understand the coordinate system. The most efficient way would probably be to use a for loop. They’re next.
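The for loop version I'm working towards would be something like this: two loops, one per axis, with the spacing value my own choice.

```processing
void setup() {
  size(400, 400);
  int spacing = 40;
  // vertical lines across the width of the window
  for (int x = 0; x <= width; x += spacing) {
    line(x, 0, x, height);
  }
  // horizontal lines down the height of the window
  for (int y = 0; y <= height; y += spacing) {
    line(0, y, width, y);
  }
}
```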
One of the items included in my list of ‘What’s next’ to be considered for development in future iterations of augmented contextual instruction is to depart from the use of proprietary augmented reality platforms and pursue the development of device- and platform-agnostic augmented contextual instruction, along with additional augmented reality experiences for display in web browsers, built on WebSockets, Web Real-Time Communication (WebRTC) and the Web Graphics Library (WebGL). The time for that departure is now. It’s time for some self-directed post-scholarship activities.
The first step in that departure is to get started with WebRTC by working my way through a codelab tutorial that explains how to build a simple video and text chat application. Although the tutorial does not prescribe specifically how to use WebRTC to develop AR experiences, I’m sure it will provide me with an excellent foundation of the skills and knowledge I’ll need later on. Besides, developing a simple video and text chat application is fun and useful.
Here’s the beginning of my simple video and text chat application.
Calling myself in the same web browser window on my mobile phone. Step 3 of 8 in Codelab’s How to build a simple video and text chat application tutorial.
Start.
Permission to access camera and microphone.
Camera accessed. Ready to call.
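Behind that permission prompt is a getUserMedia call. A modern browser-JavaScript sketch of the idea (the element id and constraints here are my assumptions, not the codelab's exact markup):

```javascript
// Ask for camera and microphone, then pipe the stream into a <video> element.
const localVideo = document.querySelector('#localVideo');

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => {
    localVideo.srcObject = stream;  // "Camera accessed. Ready to call."
  })
  .catch(error => {
    console.error('getUserMedia error:', error);  // e.g. permission denied
  });
```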
This step was fun, but the next steps are when it gets really interesting!