cloudtask: Execute a task in the cloud


cat jobs | cloudtask [ -opts ] [ command ]

cloudtask [ -opts ] --resume=SESSION_ID


Remotely executes a batch of commands in parallel over SSH, either automatically allocating cloud servers as required or using a pre-configured list of servers.

Cloudtask reads job inputs from stdin, one job per line, each line containing one or more command line arguments. For each job, the job's arguments are appended to the configured command and the result is executed via SSH on a worker.
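A local simulation of this dispatch may help make it concrete. Plain echo stands in for the SSH step; no workers are contacted:

```shell
# Each line of stdin becomes the trailing arguments of the command.
# Simulate what cloudtask would run on a worker for each job input:
plan=$(seq 3 | while read -r args; do
    echo "worker would run: echo $args"
done)
echo "$plan"
```

With `seq 3` as job input and `echo` as the command, three jobs are planned: `echo 1`, `echo 2`, `echo 3`.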


command := a shell command to execute. Job inputs are appended as arguments to this command.


--force: Don't ask for confirmation

--hub-apikey=APIKEY: Hub API key (required if launching workers)
--backup-id=ID: TurnKey Backup ID to restore on launch
--ec2-region=REGION: Region for instance launch (default: us-east-1)

      us-east-1 (Virginia, USA)
      us-west-1 (California, USA)
      eu-west-1 (Ireland, Europe)
      ap-southeast-1 (Singapore, Asia)

--ec2-size=SIZE: Instance launch size (default: m1.small)

      t1.micro (1 CPU core, 613M RAM, no tmp storage)
      m1.small (1 CPU core, 1.7G RAM, 160G tmp storage)
      c1.medium (2 CPU cores, 1.7G RAM, 350G tmp storage)

--ec2-type=TYPE: Instance launch type <s3|ebs> (default: s3)

--sessions=PATH: Path where sessions are stored (default: $HOME/.cloudtask)

--timeout=SECONDS: How many seconds to wait before giving up (default: 3600)
--user=USERNAME: Username to execute commands as (default: root)
--pre=COMMAND: Worker setup command
--post=COMMAND: Worker cleanup command
--overlay=PATH: Path to worker filesystem overlay
--split=NUM: Number of workers to split jobs across in parallel

--workers=ADDRESSES: List of pre-allocated workers to use

      path/to/file | host-1 ... host-N

--report=HOOK: Task reporting hook


      sh: command || py: file || py: code
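A quick sketch of the --workers option: the value is either a path to a file of addresses or an inline list (assuming the inline form is space-separated, as the format line suggests; host-1 through host-3 are placeholder hostnames):

```shell
# Build a workers file, one pre-allocated host per line:
printf 'host-1\nhost-2\nhost-3\n' > workers.txt
# Hypothetical invocations using it:
#   cat jobs | cloudtask --workers=workers.txt command
#   cat jobs | cloudtask --workers="host-1 host-2 host-3" command
wc -l < workers.txt
```

Using pre-allocated workers avoids launching (and paying for) new cloud instances, so --hub-apikey is not needed in this mode.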


    # run echo on each of 10 job inputs
    seq 10 | cloudtask echo

    # same, splitting the jobs across 3 workers
    seq 10 | cloudtask --split=3 echo

    # resume session 1 while overriding timeout
    cloudtask --resume=1 --timeout=6
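A fuller sketch tying --overlay and --workers together, assuming the overlay is a directory tree applied onto each worker's filesystem, as the option name suggests. The overlay path, runjob script, and hostnames are all illustrative, and the cloudtask line is shown commented out since it needs real workers:

```shell
# Prepare an overlay that ships a small job script to each worker:
mkdir -p overlay/usr/local/bin
printf '#!/bin/sh\necho "job: $*"\n' > overlay/usr/local/bin/runjob
chmod +x overlay/usr/local/bin/runjob

# Then, hypothetically:
#   seq 10 | cloudtask --overlay=overlay --workers="host-1 host-2" runjob
ls overlay/usr/local/bin
```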