Wednesday, November 13, 2013

2 lines that must be on your <head>

In a few words:

Make sure these lines are included in your <head>:
<meta charset="utf-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>

Thursday, September 5, 2013

Take a photo from Javascript

Finally, it's 2013 and you can take photos from your browser without requiring Flash.

The API that makes this possible is getUserMedia ( http://caniuse.com/#search=getUserMedia ) and it is available in all modern browsers (though it still requires vendor prefixes).

For a demo, use this fiddle.

The code I paste here takes a photo, dumps it to a canvas and tries to upload it to a (non-existent) server.
- The key here is that the captured image AND the uploaded image do not need to be the same size (you usually don't want to upload very big files). That is controlled via the OUTPUT_RATIO constant: the final size is given by the output canvas.
- Neither the video nor the output needs to be shown; however, it is good to give visual feedback to users.
- NOTE: Chrome does not allow local files to access getUserMedia. You can use the fiddle to test it yourself.
- At present, the API still needs to be prefixed depending on the browser:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;



<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <title>getUserApi</title>
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width">
        <style type="text/css">
        video {
          background: rgba(255,255,255,0.5);
          border: 1px solid #ccc;
        }
        </style>
    </head>
    <body>
        <div id='text'>
            <p>You must grant access to the Camera first.</p>
            <p>The prompt will be shown above these lines, next to the address bar.</p>
            <button type='button' id='button'>Take photo</button>
        </div>
        <video id='video' width="640" height="480"></video>
        <canvas id='canvas' style="display:none;"></canvas>
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.js"></script>
        <script>
        (function(window) {

          var nav = window.navigator,
            doc = window.document,
            //some browsers behave differently
            is_webkit = nav.webkitGetUserMedia,
            is_mozilla = nav.mozGetUserMedia,
            showSnapshot = true,
            showVideo = true,
            OUTPUT_RATIO = 0.5, //the output is X times the captured image (ex: we upload small photos)

            source,
            video,
            canvas,
            button,
            ctx,
            localMediaStream;

          var
              initCamera = function() {
              video = document.getElementById('video'),
              canvas = document.getElementById('canvas'),
              button = document.getElementById('button'),
              ctx = canvas.getContext('2d');

              //make canvas and video the same dimensions
              canvas.width = video.width * OUTPUT_RATIO | 0;
              canvas.height = video.height * OUTPUT_RATIO | 0;

              //turn canvas to visible
              canvas.style.display = showSnapshot ? '' : 'none';
              video.style.display = showVideo ? '' : 'none';
              (button || video).addEventListener('click', takeSnapshot, false); //addEventListener: IE9+ Opera7+ Safari, FFox, Chrome

              // if (is_webkit){
              //   nav.getUserMedia('video', onSuccess, onError);
              // }else{
              nav.getUserMedia({
                video: true
              }, onSuccess, onError);
              // }

            },
            onError = function(e) {
              alert('Camera permission rejected!', e);
            },
            onSuccess = function(stream) {
                if (is_mozilla) {
                  source = window.URL.createObjectURL(stream);
                } else if (is_webkit) {
                  source = window.webkitURL.createObjectURL(stream);
                } else {
                  source = stream;
                }

                video.src = source;
                video.play();
                localMediaStream = stream;
            }, stopCamera = function(){
              localMediaStream.stop();
              video.style.display = canvas.style.display = 'none';
              localMediaStream = canvas = ctx = null;
              //detach the click handler now that the camera is stopped
              (button || video).removeEventListener('click', takeSnapshot, false);
            }, takeSnapshot = function() {
            if (localMediaStream) {
              ctx.drawImage(video, 0, 0, (video.width * OUTPUT_RATIO) | 0, (video.height * OUTPUT_RATIO) | 0);
              uploadSnapshot();
            }
          }, uploadSnapshot = function(){
              var dataUrl;

            try {
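                //try JPEG first (usually smaller for photos); the catch falls back to the browser's default encoding (PNG)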
                dataUrl = canvas.toDataURL('image/jpeg', 1).split(',')[1];
            } catch(e) {
                dataUrl = canvas.toDataURL().split(',')[1];
            }
            $.ajax({
                url: "localhost:3000/uploadTest",
                type: "POST",
                data: {imagedata : dataUrl}, //in the server file.write(Base64.decode64(imagedata)) , https://gist.github.com/pierrevalade/397615
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function () {
                    alert('Image Uploaded!!');
                },
                error: function () {
                    alert("There was some error while uploading Image");
                }
            });
          };


            //some browsers use prefixes
          nav.getUserMedia = nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia || nav.msGetUserMedia;

          if (nav.getUserMedia) {
            initCamera();
          } else {
            alert("Your browser does not support getUserMedia()")
          }
        }(this));
        </script>
    </body>
</html> 

Bonus

For large images you can save some bytes (base64 adds roughly 33% of overhead) by sending the image as a Blob instead of a base64 string.

First, convert the base64 payload of the data URL into a Blob:

function dataURItoBlob(dataURI, dataTYPE) {
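  // dataURI here is the raw base64 payload, i.e. the part of the data URL after the comma (atob cannot decode the "data:...;base64," prefix)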
  var binary = atob(dataURI), array = [];
  for(var i = 0; i < binary.length; i++) array.push(binary.charCodeAt(i));
  return new Blob([new Uint8Array(array)], {type: dataTYPE});
}

Then, you have 2 options:

- using the FormData API

function uploadWithFormData(dataUrl){
  // Get our file
  var file = dataURItoBlob(dataUrl, 'image/jpeg'),
  fd = new FormData();
  // Append our Canvas image file to the form data
  fd.append("imageNameHere", file);
  // And send it
  $.ajax({
     url: "/server",
     type: "POST",
     data: fd,
     processData: false,
     contentType: false,
  });
}

- or using XHR directly

function uploadWithXHR(dataUrl) {
  var file = dataURItoBlob(dataUrl, 'image/jpeg'),
  xhr = new XMLHttpRequest();
  xhr.open('POST', '/server', true);
  //add the headers you need
  // xhr.setRequestHeader("Cache-Control", "no-cache");
  // xhr.setRequestHeader("X-Requested-With", "XMLHttpRequest");
  // xhr.setRequestHeader("X-File-Name", file.name || file.fileName || 'image.jpg');
  // xhr.setRequestHeader("X-File-Size", file.size || file.fileSize);
  // xhr.setRequestHeader("X-File-Type", file.type);
  // xhr.setRequestHeader("Content-Type", options.type);
  // xhr.setRequestHeader("Accept","application/json, text/javascript, */*; q=0.01");
  xhr.send(file);
}
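
As a sketch of how these helpers could plug into the earlier snippet (the glue function below and the 0.8 JPEG quality are my own hypothetical choices, not part of the original code):

function uploadSnapshotAsBlob(canvas) {
  // keep only the base64 payload, which is what dataURItoBlob() expects
  var dataUrl = canvas.toDataURL('image/jpeg', 0.8).split(',')[1];
  uploadWithFormData(dataUrl); // or uploadWithXHR(dataUrl)
}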

Friday, August 23, 2013

rubygems: uninstall some gems

I just discovered a way to uninstall only the gems that match a pattern (via https://coderwall.com/p/lpqmjq )

gem list [OPTIONAL PATTERN] --no-version | xargs gem uninstall -ax

for example

gem list hobo --no-version | xargs gem uninstall -ax

removes all the 'hobo' gems:

Successfully uninstalled hobo_jquery_ui-2.0.1
Successfully uninstalled hobo_clean_admin-2.0.1
Successfully uninstalled hobo_clean-2.0.1
Successfully uninstalled hobo_bootstrap_ui-2.0.1
Successfully uninstalled hobo_bootstrap-2.0.1
Successfully uninstalled hobo_jquery-2.0.1
Successfully uninstalled hobo_rapid-2.0.1
Removing hobo
Successfully uninstalled hobo-2.0.1
Removing hobofields
Successfully uninstalled hobo_fields-2.0.1

Sunday, August 11, 2013

JS: detect unsaved changes in a form

Here is a small jQuery plugin to detect changes on a form.

https://github.com/gsusmonzon/jquery.simple.unsaved

The tricky part is storing a hash of the serialization string instead of the full serialized form; the rest is not worth mentioning. A rough sketch of the idea is below.
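
Here is a minimal sketch of that idea, assuming jQuery; the hash function and the names are illustrative, not the plugin's actual API:

function formHash($form) {
  //cheap djb2-style hash of the serialized form, so we only keep a small number around
  var s = $form.serialize(), h = 5381;
  for (var i = 0; i < s.length; i++) h = ((h << 5) + h + s.charCodeAt(i)) | 0;
  return h;
}

var $form = $('form'),
    savedHash = formHash($form);

window.onbeforeunload = function () {
  //warn only if the form no longer matches the saved state
  if (formHash($form) !== savedHash) {
    return 'You have unsaved changes.';
  }
};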

Friday, July 26, 2013

Solving ActiveRecord::ReadOnlyRecord in Rails 3


When you pass an SQL fragment to a finder, join or named scope, ActiveRecord returns read-only results by default.

Use readonly(false) in your queries to force the results to be writable. For example, in Rails 3:

User.joins("INNER JOIN `cars` ON `cars`.`user_id` = `users`.`id` AND `cars`.`colour`  = 'electric blue'").readonly(false)

Sunday, July 21, 2013

Backup with Duplicity and Rackspace Cloud Files

Intro


Duplicity is a Linux tool for making backups of files and folders:
  • supports full and incremental backups
  • supports encryption via GPG (you can have unencrypted backups as well)
  • supports many kinds of storage: scp, rsync, Amazon S3 or Rackspace Cloud Files

My current setup makes daily incremental backups and stores them in Rackspace Cloud Files.
Once a fortnight it does a full backup and removes backups older than 30 days.
To use other storage backends you only need to change the last part of this guide.

During this article:

- Machine L is your local and personal machine. L == local
- Machine B is the machine with the data that we want to back up. B == backed up. Duplicity runs on this machine. I use Ubuntu 12.04 here.
- Machine S is the remote machine where we'll store the backup. S == storage

Step 1: Generate the encryption keys

We'll generate the keys on our local machine and export them to the backup machine.

( from http://www.debian-administration.org/articles/209 )

We'll need two gpg keys for our backups

- encryption key : the encryption key is used to protect the data in the backup files from snooping on the backup server
- signature key : the signature key is used to ensure the integrity of the backup files.

The private part of the signature key must be available to duplicity when it runs, and duplicity also requires the passphrase for the signing key to be either entered manually or stored in an environment variable (that means on Machine B). If our encryption key and signature key were the same, a compromise of the server would mean a compromise of the backed-up data as well. We'll therefore use separate encryption and signature keys.

In your local machine, Machine L

sudo apt-get install gnupg

Generate the encryption key

(in your local Machine L)

gpg --gen-key

  (and pick the default options: RSA & RSA + 4096 + never expires)
  passphrase: this is the encryption passphrase


with a result of

  gpg: key 5A87AAB8 marked as ultimately trusted
  public and secret key created and signed.

...


Do the same to generate your signature key, using a different passphrase.
(in your local Machine L)

Generate the signature key

gpg --gen-key

  (and pick the default options: RSA & RSA + 4096 + never expires)
  passphrase: this is the sign passphrase

...

  gpg: key 927AE728 marked as ultimately trusted
  public and secret key created and signed.


To check that everything went well:

gpg --list-keys && gpg --list-secret-keys
/home/jesus/.gnupg/pubring.gpg
------------------------------
pub   4096R/5A87AAB8 2013-07-07
uid                  backuper-encrypt (Backup with duplicity)
sub   4096R/11122AE7 2013-07-07

pub   4096R/927AE728 2013-07-07
uid                  backuper-signature (Signature with duplicity)
sub   4096R/10E7002A 2013-07-07

/home/jesus/.gnupg/secring.gpg
------------------------------
sec   4096R/5A87AAB8 2013-07-07
uid                  backuper-encrypt (Backup with duplicity)
ssb   4096R/11122AE7 2013-07-07

sec   4096R/927AE728 2013-07-07
uid                  backuper-signature (Signature with duplicity)
ssb   4096R/10E7002A 2013-07-07

Trust your keys before exporting

Now we are going to trust the keys before exporting them.

gpg --edit-key 927AE728
  > trust
  > 5
  > save

gpg --edit-key 5A87AAB8
  > trust
  > 5
  > save


And sign the keys

gpg --sign-key 927AE728
gpg --sign-key 5A87AAB8 (not sure if this one is needed)

Once both keys have been created, you need to export and copy the public encryption key and the private signature key to Machine B. The safest way to do this is SCP/SSH (you'll need ssh access). You MUST keep the private encryption key and its passphrase safe and private.

(in your local Machine L; change the IP to that of Machine B)
cd /tmp
gpg --export -a 5A87AAB8 > backup.enc.pub.gpg
gpg --export-secret-keys -a 927AE728 > backup.sig.sec.gpg
gpg --export-ownertrust > backup.trust

scp backup.enc.pub.gpg backup.sig.sec.gpg backup.trust bob@192.168.33.10:/tmp
rm backup.*

Import the keys on the backup server

Our backups are handled by root (it has full access to everything, and the signature passphrase stays private), so we need to configure duplicity logged in as root on Machine B.
(in Machine B)

sudo su
sudo apt-get install gnupg

cd /tmp
gpg --import /tmp/backup.sig.sec.gpg /tmp/backup.enc.pub.gpg
gpg --import-ownertrust /tmp/backup.trust

rm backup.*

Verify that the keys were imported correctly and that the IDs match. The private encryption key was not transferred, so we expect only one entry in the secret keys.

gpg --list-keys && gpg --list-secret-keys

/root/.gnupg/pubring.gpg
------------------------
pub   4096R/927AE728 2013-07-07
uid                  backuper-signature (Signature with duplicity)


pub   4096R/5A87AAB8 2013-07-07
uid                  backuper-encrypt (Backup with duplicity)


/root/.gnupg/secring.gpg
------------------------
sec   4096R/927AE728 2013-07-07
uid                  backuper-signature (Signature with duplicity)



Note: if you didn't import the ownertrust file, trust the signature key manually (in case of 'untrusted key' errors while running duplicity):

gpg --edit-key 927AE728
  > trust
  > 5
  > save


Step 2: configure duplicity to use Cloud Files

Install duplicity. I use the latest version, which is not included by default in Ubuntu, so I prefer to add a PPA for it and install via apt.

sudo apt-get -y install python-software-properties && sudo add-apt-repository -y  ppa:duplicity-team/ppa &&  sudo apt-get -y update && sudo apt-get -y upgrade

sudo apt-get -y install duplicity python-paramiko

Adding Cloud Files support

This step is only required if you are going to store backups on Rackspace Cloud Files. You'll find a lot more tutorials for Amazon S3.
Storing backups on a remote server via scp or rsync is even easier, and you have already done the hard part: jump to the next step.

There are two ways of using Cloud Files; I use the new pyrax API, since the old python-cloudfiles is now deprecated. Choose what works best for you.

Option A) using the new pyrax API

It is the official way, but as of July '13 it requires more manual tuning.

sudo apt-get -y install python-pip python-dev build-essential
yes | sudo pip install pyrax && yes | sudo pip uninstall keyring
sudo apt-get -y install duplicity python-paramiko gnupg


As of July '13 the backend needed for pyrax is missing in duplicity, so we need to copy the new backend ourselves (at present the backend for cfpyrax+http:// is not shipped). A backend is a 'module' that tells duplicity how to talk to a storage like scp, rsync, S3, etc.
(remember we are root)

cd /tmp
wget https://bugs.launchpad.net/duplicity/+bug/1179322/+attachment/3735776/+files/pyraxbackend.py
sudo chown root:root pyraxbackend.py
sudo mv pyraxbackend.py /usr/share/pyshared/duplicity/backends/


sudo ln -s /usr/share/pyshared/duplicity/backends/pyraxbackend.py /usr/lib/python2.7/dist-packages/duplicity/backends/pyraxbackend.py


python -m compileall /usr/lib/python2.7/dist-packages/duplicity/backends


(on my Machine B it was in python2.7/dist-packages; on yours, run a `sudo find / -name backends` to find where to link to)

These steps enable duplicity to understand the cfpyrax+http:// scheme.
Note that it uses HTTPS even though the scheme reads just http.

Option B) using the deprecated python-cloudfiles API

sudo apt-get -y install python-stdeb
sudo pypi-install python-cloudfiles
sudo apt-get -y install duplicity python-paramiko


These installs enable duplicity to understand the cf+http:// scheme.
Note that it uses HTTPS even though the scheme reads just http.

Step 3: script for making the backups

On Machine B, we set up a cron task that runs daily. It runs as root and uses duplicity to make a backup and copy it to Cloud Files (or the destination Machine S).

A base script for cloud files could be


CLOUD_CONTAINER="bob_backup" 
#required for CLOUD FILES SUPPORT 
export CLOUDFILES_USERNAME=my_username
export CLOUDFILES_APIKEY=4534534543543sd43434546456
export CLOUDFILES_REGION="ORD"
 
#required for duplicity 
export PASSPHRASE="passphrase for the sign key"
export SIGN_PASSPHRASE="passphrase for the sign key" 

options="--full-if-older-than 15D --volsize 250 --exclude-other-filesystems --sign-key 927AE728 --encrypt-key 5A87AAB8"
duplicity $options /var/log cfpyrax+http://${CLOUD_CONTAINER}
unset PASSPHRASE
unset SIGN_PASSPHRASE
unset CLOUDFILES_APIKEY

Note how duplicity is instructed to use the two keys and how you pass the passphrase of the signing key (this is safe: to read the backups you need the private part of the encryption key AND its passphrase).

Duplicity generates 3 files (data, metadata and signature) each time it runs. These files will appear in your Cloud Files container.

As we are using the Cloud Files pyrax API, we use a cfpyrax+http:// URI. Change the URI scheme to cf+http:// for the old API.
If you back up to a server via scp or rsync, change this remote URI accordingly.

For amazon, read this.

To keep your backups from growing too much, add something like this at the end of the script:

# Delete duplicity backups older than 30 days.
duplicity remove-older-than 30D --sign-key 927AE728 --encrypt-key 5A87AAB8 cfpyrax+http://${CLOUD_CONTAINER}

Verify the encryption.

To check that everything went well, we can tell duplicity to report the status of the backup. We can do it from our Machine B, and the command to use is:
(remember to export all the Cloud Files variables first, as in the previous script)

duplicity collection-status --sign-key 927AE728 --encrypt-key 5A87AAB8 cfpyrax+http://${CLOUD_CONTAINER}

It will list all your backups and print a comforting "No orphaned or incomplete backup sets found".

Testing the recovery

We need the private encryption key and its passphrase. Remember that we kept them on our private Machine L. If you lose them, you won't be able to recover your backup.

Move to your local Machine L, where the private keys are available, and install duplicity (and the Cloud Files support from step 2).

To do a restore, you need to run the duplicity command with the restore option. You will be prompted for a passphrase; this time use the encryption passphrase.

The command is
 duplicity [restore] [options] source_url target_dir
but 'restore' is optional: duplicity knows that we are restoring because the remote URL comes before a local directory. When the URL is the last parameter, duplicity does a backup.

#!/bin/bash
# note, to run this script you need to have imported the PRIVATE KEY used for encryption
# gpg --import /tmp/backup.sig.sec.gpg /tmp/backup.enc.pub.gpg 
# and you must know the passphrase for the encryption key

DST_FOLDER=/tmp/restored_files 
mkdir -p $DST_FOLDER

CLOUD_CONTAINER="bob_backup"
#required for CLOUD FILES SUPPORT 

export CLOUDFILES_USERNAME=my_username
export CLOUDFILES_APIKEY=4534534543543sd43434546456
export CLOUDFILES_REGION="ORD" 
# no passphrase provided, so we'll be asked interactively

options="--sign-key 927AE728 --encrypt-key 5A87AAB8 --volsize 250"
duplicity $options cfpyrax+http://${CLOUD_CONTAINER} $DST_FOLDER

unset CLOUDFILES_APIKEY

The verify command is another useful one (run it on this Machine L):

duplicity verify [options] source_url target_dir

Step 4: Finishing

Just remember to keep your encryption key and passphrase safe, and to check your backups on a regular basis.

Sources

http://27smiles.com/2010/04/07/securely-backup-of-vps-with-duplicity-and-gpg/
http://spin.atomicobject.com/2012/06/14/encrypted-offsite-backups-with-duplicity/
http://www.debian-administration.org/articles/209
For integration with Cloud Files:

http://www.uno-code.com/?q=node/184
http://blog.chmouel.com/2011/01/06/backup-with-duplicity-on-rackspace-cloudfiles-including-uk-script/

Monday, May 20, 2013

Mysql Fix Illegal mix of collations

If you get an error like this while using MySQL:

Mysql2::Error: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation

This is probably due to mixing different collations in a SELECT: in my case I was joining columns with different collations. Here is how to fix it.

I set prudent defaults on my database so that it won't happen again:
mysql>
ALTER DATABASE `database_name` CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER DATABASE `database_name` DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_unicode_ci;

or, in a Rails migration:
execute("ALTER DATABASE `#{ActiveRecord::Base.connection.current_database}` CHARACTER SET utf8 COLLATE utf8_unicode_ci;")
execute("ALTER DATABASE `#{ActiveRecord::Base.connection.current_database}` DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_unicode_ci;")
Then fix the tables you need to. If you don't know which tables are problematic, ask the database:
SELECT table_schema, table_name, column_name, character_set_name, collation_name  FROM information_schema.columns  WHERE table_schema = 'database_name' AND collation_name <> 'utf8_unicode_ci' ORDER BY table_schema, table_name,ordinal_position;
    or

SELECT table_name FROM information_schema.columns WHERE table_schema = 'database_name' AND collation_name <> 'utf8_unicode_ci' GROUP BY table_name;
Then, for each table:

ALTER TABLE table_name CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;



You can put all of this stuff in a single Rails migration.

Profit!

Sunday, May 5, 2013

Testing with Robolectric in Android

This is a quick guide to how I set up a testing environment for our Android application.

We use Robolectric and Mockito instead of the Android tools. The key benefit of this setup is *speed*: tests run in a plain Java project, bypassing the emulator (the Android tools run tests in the emulator).

Maven: not today

The Robolectric documentation advises installing it through Maven. However, I was unable to mavenize our project; in fact, making Eclipse (we use Eclipse) play nice with Maven corrupted my ADT install twice. So we will use Robolectric without Maven.

Preparation

Make sure that your Android tools are in the system path. Open a terminal; if android -h is not recognized, find the path of your tools and then update your .profile or .bashrc with something like these lines at the end (use your own path):


ANDROID_HOME=/home/jesus/bin/adt-bundle-linux-x86_64/sdk
export ANDROID_HOME

PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
export PATH
 
Reload your profile: source ~/.profile

I recommend downloading the source code of Robolectric from https://github.com/pivotal/robolectric and the sample project https://github.com/pivotal/RobolectricSample . I use them as documentation when the official docs fall short.

Note that we can use the pom.xml in the robolectric project ( https://github.com/pivotal/robolectric/blob/master/pom.xml ) to know which versions of each library are safe to use when downloading the dependencies. I call it the pom trick.

Eclipse project

This is key to understand: Robolectric runs as a Java application, not as an Android application. So our test project will be a Java project, not an Android project or an Android test project, and our run configuration will be a Java JUnit configuration, not an Android test run configuration. Let's see it now.

Create a new Java project: File > New > Java Project. I name it after the Android project to test, plus 'Test'; for example, Microhealth and MicrohealthTest. This new project will be our test project, as opposed to the Android project. Finish.



Create a folder called libs where we put the libraries needed to run Robolectric. I usually create it in the file explorer and then press 'Refresh (F5)' in Eclipse. What libraries do we need? It might change with newer versions of Robolectric, but at least:
- Robolectric: get it from http://pivotal.github.io/robolectric/download.html (which in turn redirects to Sonatype). Download the latest robolectric-X.X.X-jar-with-dependencies.jar; in my case I am using robolectric-2.0-alpha3-jar-with-dependencies.
- JUnit 4: we need JUnit 4 from http://junit.org/ . Not all versions are compatible with Robolectric; I am currently using junit-4.10.jar and discarded newer versions (or use the pom trick I described before).
- Mockito: get mockito-all-1.9.5.jar from http://code.google.com/p/mockito/downloads/list
- android.jar: get it from your Android installation, at sdk_root/platforms/android-9/android.jar (I am using 9 as the min version; change it to yours).
- in case you need maps: get maps.jar from sdk_root/add-ons/addon-google_apis_google-9/libs/maps.jar. I usually skip this part.
- FEST libs: these are required by Robolectric to make writing tests less verbose. It took me some time to get the right versions of FEST, but you can use the pom trick as well: fest-assert-core-2.0M10.jar and fest-util-1.2.5.jar (download from http://mvnrepository.com/artifact/org.easytesting/fest-assert-core/2.0M10 ).
- hamcrest-all: hamcrest-all-1.3.jar from http://code.google.com/p/hamcrest/downloads/list

Once you have all of them in your test project's /libs folder, declare that you want to use them: right-click on the test project > Properties > Java Build Path > Libraries > Add JARs, and add them.

Make sure that Robolectric and its dependencies (including JUnit) appear before the Android API jars in the classpath: in Properties > Order and Export, move android.jar and maps.jar after all other libraries.


Require your Android project in the build path: make sure that Properties > Java Build Path > Projects references your Android project (Add > your Android project).

That's all. Now we are going to test our setup.

(I found the official guide http://pivotal.github.io/robolectric/eclipse-quick-start.html a bit outdated, but maybe it works better with older Robolectric versions.)

Our first test

Create a new class in the test project. This will contain our first test. Something like:


package com.microhealth.test.testicle;

//Let's import Mockito statically so that the code looks clearer
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;


@RunWith(RobolectricTestRunner.class)
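//RobolectricTestRunner makes the test run on the plain JVM, using Robolectric's shadow classes instead of the emulator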
public class FooTest {
  
  @Test
  public void testDummmy() throws Exception {
    assertTrue(true);
  }
  
  
  @Before
  public void setUp() throws Exception {
    //nothing to set up yet
    
  }
  
}

The test MUST be run as a JUnit test, NOT as an Android test. Go to the Run > Run Configurations menu and create a new JUnit test configuration (the name is not important). Do not create an 'Android JUnit Test'.
  • Set the test runner to JUnit 4 (in the Test tab).
  • Check 'Run all tests in the selected project, package or source folder' and choose your test project (not the Android project).
  • At the bottom, locate the link 'Multiple launchers available - Select one…'. Click the 'Select other…' link, check 'Use configuration specific settings' and choose 'Eclipse JUnit Launcher'. Remember that the test project runs as a Java project; we don't want it to run as an Android one.
  • In the 'Arguments' tab, configure the working directory to be the Android project directory: in the 'Working directory' section, check 'Other' > 'Workspace', locate your Android project and select it.
Click 'Run' to save the configuration and run your tests. If everything is OK, your first test runs and passes.


In a future post I will explain how to set up tests for a project that depends on DataDroid and ActionBarSherlock.

Tuesday, March 5, 2013

Autologin and lock in Ubuntu 12.10

It is very handy to set your computer to log into your account automatically and then lock the screen. And it is very easy to set up in Ubuntu (I am using Ubuntu 12.10 with Unity).

1.- Set your screen saver to lock your station:
Settings > Brightness and Lock, and turn on the lock options:


2.- Set your account to do autologin
Settings > User Accounts > Automatic Login (remember to unlock the settings if you can't change them)

Once it is changed, lock the settings again.

3.- Add your screen saver to be run on boot:
Startup Applications Preferences > Add,
then add an entry for the screen saver:

Name: (whatever)
Command: /usr/bin/gnome-screensaver-command -l
Comment: (whatever)

Save.

For the locking command, I used:
gnome-screensaver-command --lock

or

xdg-screensaver lock


4.- Bonus: I like to add my email program to the startup as well, in the same way.


Done! Next time your machine boots, the screen will be locked with your screensaver.