
Office Grid Computing using Virtual environments – Part 1

Friday 4th December 2009, 11:23 pm

Introduction

I work in a company where we run many batch jobs, processing millions of records each day, and I’ve been thinking recently about all the machines that sit around doing nothing for several hours each and every day. Wouldn’t it be good if we could use those machines to bolster the processing power of our systems? In this set of articles I’m going to look at the potential benefits of employing an office grid using virtualised environments.

As a PHP developer I’m going to use the tools that I use each day, namely Linux, MySQL, PHP, VirtualBox and Subversion (SVN). However, I hope this guide will adapt to other languages and technologies just as well.

The solution I provide will be very loosely based on the type of processing we’d need to achieve; however, this may not hold true through the entire article, as I’ll change things for simplicity or to produce more interesting usage scenarios.

These virtualised environments will run on Windows machines, since this is what the majority of offices run. The processing that the office machines do should not interfere with staff using those machines, should require no maintenance at the machine, and should be easily deployable to new machines as they become available. Also, new virtual machines should not require any additional configuration, as per-machine configuration greatly reduces the scalability and ease with which the grid system can be extended.

Why Deploy an Office Computing Grid?

Firstly you may be thinking, why not just use a cloud computing resource such as Amazon’s EC2 platform? Well, the reasons could be several, for example:

  • You won’t entrust certain data to a cloud computing environment
  • You can’t put certain data into a cloud computing environment for legal reasons (e.g. data that must not leave the country, such as NHS records)
  • You want to keep your processing units close and have full control over the hardware too
  • You don’t have the project funds to run cloud instances
  • Your office doesn’t have a connection to the internet, and therefore it’s not possible to use a cloud resource
  • You don’t like rain, clouds suggest rain, therefore you keep well away

I’m sure the list could continue, but I think that’s enough for now.

Advantages of an Office Computing Grid

Well, let’s do some maths (and in true physics style let’s make some sweeping assumptions). Imagine you have a big, beefy processing server running 100 jobs per day. In your office you have 50 machines which are idle 16 hours a day, and each of these machines is 10% as powerful as your beefy processing server. (All results here are rounded down to underestimate the performance increase.)

So, 1 machine * 10% power * 2/3 of the day = 0.067 of a server, i.e. 1 desktop processing in its idle time could handle about 6 of our 100 daily jobs.

If you now scale this up, it takes 15 idle desktops to process as many jobs per day as your main processing server does.

So in our pretend office of 50 machines we could increase our processing power from 1 server to the equivalent of 4 full processing servers; in other words, we could be processing 400 jobs per day instead of 100.
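To replay this back-of-envelope maths with your own numbers, here is a minimal PHP sketch (the function name and figures are mine, purely illustrative of the assumptions above):

<?php
// Rough estimate of the extra daily jobs gained from idle office machines.
// $serverJobsPerDay: jobs your main server completes in 24 hours
// $machines:         number of office desktops available
// $relativePower:    desktop speed as a fraction of the server (0.10 = 10%)
// $idleHoursPerDay:  hours per day each desktop sits idle
function estimateGridJobs($serverJobsPerDay, $machines, $relativePower, $idleHoursPerDay)
{
    $serverEquivalents = $machines * $relativePower * ($idleHoursPerDay / 24);
    // Round down, in keeping with our deliberate underestimates
    return (int) floor($serverEquivalents * $serverJobsPerDay);
}

// The worked example above: prints 333 extra jobs per day
echo estimateGridJobs(100, 50, 0.10, 16);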

Notice that, for no investment in new hardware, your company has just increased its batch processing capacity four times! Potentially you’re going to increase your power usage, but in most office environments I’ve been to machines are generally left on overnight anyway, so you could even see this as a green initiative.

Other advantages: investment in new (or upgraded) processing servers can be delayed if your office machines are sufficient, and as you improve the power of your office machines your office grid becomes more powerful automatically.

Technologies

What do you need? (or, more correctly, what did I use):

  • Idle office machines (in my case a spare old Windows XP laptop)
  • VirtualBox (or another virtualisation client software)
  • A virtual machine running PHP and MySQL on a cut-down OS; I’m calling these my LiMP servers :)
  • Jobs to run
  • Job server (can be another virtual machine somewhere)

Typical Jobs

The types of jobs that this system is designed to run are as follows:

  • System receives a list of data upon which we need to match and return results
  • Matching involves checking/searching several (fairly static) data sources
  • Results from data sources may require further validation, merging, checking of additional data sources in response to results
  • Data is returned with matching records, fully validated and processed
  • Each record within a job is independent of the rest

So basically we’re looking at running jobs which require a mixture of database lookups and some number crunching, a fairly typical scenario in a business environment.

Grid solutions are not only advantageous for processing jobs of this type: any process which can be split into independent units can be run in parallel. See the Wikipedia article on Grid Computing for examples and more information; a couple of famous examples are SETI@home and BOINC. There are also frameworks for running computing grids, and these are well worth looking into.

What will we achieve?

By the end of these articles I hope to show that deploying an office grid need not be hugely expensive or time consuming. I’m going to discuss:

  • Setting up the job control system, job configuration
  • Creating an appropriate processing virtual machine
  • How to set up the system on a Windows machine
  • Ensuring you are using the latest code and data
  • Deployment and benchmarking
  • Looking ahead

I’ll be building (OK, I built, then wrote this) an example application to test the concepts on a local machine using Windows XP and my ‘GridMachine’ virtual machine. My job control server will be my main machine, which runs Fedora 11.

This is in no way meant to demonstrate a fully working, robust system; it’s meant more as a demonstration and discussion, showing that these things can be achieved in a reasonably short space of time and at little cost. Please feel free to send me any comments, corrections, or improvements and I’ll do my best to keep this article updated to match.

Next time

In part 2 I will start by looking at the job control system, and look into how jobs should be configured in order to achieve the greatest amount of processing whilst ensuring that each job is processed without fail.

Office Grid Computing using Virtual environments – Part 2

Friday 4th December 2009, 11:23 pm

Introduction

I work in a company where we run many batch jobs, processing millions of records each day, and I’ve been thinking recently about all the machines that sit around doing nothing for several hours each and every day. Wouldn’t it be good if we could use those machines to bolster the processing power of our systems? In this set of articles I’m going to look at the potential benefits of employing an office grid using virtualised environments.

In Part 1 I gave an overview of the system and the technologies I will be using, and discussed some of the potential reasons why you would want to create an office grid.

Job Control

If you’re going to be running jobs then you’re going to need some way to manage them. Your job control system (on your job server) needs to be really well thought out before you even attempt to run an office grid. So firstly, what are the tasks of a job control system:

  • Hand out jobs upon request from workers
  • Tell workers what type of jobs to run
  • Track jobs
  • Ensure that jobs are only run once
  • Provide job data to workers, or at least tell them where to get it

The system also needs to be extensible: a solution that works for now in a single case may be extended to run several types of jobs as the business sees the worth in a grid solution. For example, jobs may gain priorities, more than one job type may exist (i.e. several code bases), and eventually you may even run several different worker machines that are optimised for each type of job (although that does move away from the ‘generic worker’ idea). Always try to think about the future when developing systems; short-term vision can lead to longer-term frustration and increased development time.

Job Server

We’re going to need somewhere to control our jobs from. This should be the only system in your grid that has a fixed resource locator, be that an IP address, host name, URL (using internal DNS), etc. This is because the workers need to know where to look for jobs: the workers find the job control system, not the job control system the workers.

The job server itself doesn’t really have a complicated task (in a basic system anyhow): it needs to store a list of jobs, hand out jobs, receive results, and subsequently store them for later retrieval. How these parts (such as ‘hand out jobs’) are defined can be very basic. Later on we can extend the system to include an administration interface to add, edit, delete, and suspend jobs, but this is beyond the scope of this exercise.

There is no reason whatsoever, then, that your job server could not be a virtual machine running within your main processing server, provided it doesn’t drain too many resources from it. The job server does, however, need high availability: if it goes down on a Friday evening you’re going to lose a whole weekend of processing, potentially costing you a couple of weeks’ worth of processing time (when compared to your main processing server alone). You may want to consider putting your job server in a load-balanced environment for high availability.

Basic Setup

The basic setup for our job server will consist of what I’m calling one of my LiMP servers (that is, Linux, MySQL, PHP). The code running on the workers will actually work out what jobs they can run by interacting with the job control system’s database. Later on we could create a web service and actually hand out jobs rather than having the workers do the hard work themselves, but for now we’ll stick to the KISS principle (Keep It Simple, Stupid!).

So, let’s create three MySQL tables to deal with jobs. These will be `jobs`, `job_records`, and `job_results`.

[Screenshot: the `jobs` table viewed in SQL Buddy.] Here I’m using SQL Buddy, a great little alternative to phpMyAdmin, just because it’s easier to install on CentOS (for others see: 10 Great Alternatives to phpMyAdmin).

This table consists of 5 simple fields:

  • id: Uniquely identifies the job
  • name: Could be a client reference, or any number of other identifiers
  • status: You need to know where the job is at, e.g.
    • 0: Not started
    • 1: Picked up
    • 2: Completed
  • started_by: Who’s started doing the job? This isn’t strictly required but is a nice-to-have. I’d suggest tracking workers by their IP address on your network
  • started_at: When did the worker start the job? By tracking jobs that have not completed within X amount of time, we know we need to pick the job up once again and have another worker start processing it. Workers could stop processing/go offline for any number of reasons: power failure, crash, network loss, etc.

It is easy to see how this table could be extended with a few additional fields to allow for statistics tracking: a finish-time column to see how long the job took, a counter to see how many workers picked up the job (obviously this needs to tend to 1), job priority; the list can go on and on. In more complex job scenarios it would be possible to specify how much memory the worker would need access to (and therefore only use suitable workers), or even what type of worker would be required. A possible definition for this table is sketched below.
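For reference, here is one possible way to create this table (a sketch only; the column types are my guesses based on the field descriptions above):

<?php
// Assumed connection details; substitute your own credentials.
$db = new PDO('mysql:host=localhost;dbname=grid', 'user', 'password');

// A possible definition of the `jobs` table described above.
$db->exec("
    CREATE TABLE `jobs` (
        `id`         INT UNSIGNED NOT NULL AUTO_INCREMENT,
        `name`       VARCHAR(255) NOT NULL,
        `status`     TINYINT NOT NULL DEFAULT 0, -- 0 = not started, 1 = picked up, 2 = completed
        `started_by` VARCHAR(15)  NULL,          -- worker identifier (e.g. IP address)
        `started_at` DATETIME     NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB
");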

Let’s add a few example jobs:

[Screenshot: a few example rows in the `jobs` table.]
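If you are following along without the screenshot, a couple of rows like these will do (the job names are made up):

<?php
// Uses the $db PDO connection from the previous snippet.
$db->exec("
    INSERT INTO `jobs` (`name`, `status`)
    VALUES ('client-a-address-match', 0),
           ('client-b-data-cleanse', 0)
");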

The next table again is quite simple to understand: these are our job records. They are linked to the main `jobs` table by a column `jobs_id`. The make-up of this table very much depends on the data that you need to supply to your workers; let’s make a very simple example where we have four columns:

  • id: ID of the record
  • name: Person’s name
  • address: Person’s address
  • jobs_id: The job ID that this record is linked to

The third and final table is a results table. It has much the same make-up as our records table and, with the addition of some columns, could be part of the records table:

  • job_record_id: Links the result to the job records table
  • result: The result data
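In the same sketched style, these two tables might be created as follows (again, the column types are assumptions):

<?php
// Uses the same $db PDO connection as before.
$db->exec("
    CREATE TABLE `job_records` (
        `id`      INT UNSIGNED NOT NULL AUTO_INCREMENT,
        `name`    VARCHAR(255) NOT NULL, -- person's name
        `address` VARCHAR(255) NOT NULL, -- person's address
        `jobs_id` INT UNSIGNED NOT NULL, -- the job this record belongs to
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB
");

$db->exec("
    CREATE TABLE `job_results` (
        `job_record_id` INT UNSIGNED NOT NULL, -- the record this result answers
        `result`        TEXT NOT NULL,         -- the result data
        PRIMARY KEY (`job_record_id`)
    ) ENGINE=InnoDB
");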

…and that’s all you need for job control (albeit at a very basic level)! In my case the records pointed to another table where my data to process was located, but this could just as easily have been a file, parameters for simulation code, you name it.

Selecting a job

As stated previously, the workers will do our job management for us for now, so all we really need to do is find a job that needs processing and get its information. How would we do this? Well, pick our job selection criteria and look for jobs. In SQL I did the following (a PHP sketch tying the steps together follows the list):

  1. Take any jobs that were previously claimed by this worker but never marked as complete and reset them (substitute __ME__ with an identifier; the easiest would be the IP address):
    UPDATE `jobs` SET `status` = 0 WHERE `status` = 1 AND `started_by` = __ME__;
  2. Using our job selection criteria, select a job and tell the control system that this worker is dealing with it (note the LIMIT 1, so we only claim a single job, and that stale jobs are ones started more than X hours ago):
    UPDATE `jobs` SET `status` = 1, `started_by` = __ME__, `started_at` = NOW() WHERE `status` = 0 OR
    (`status` = 1 AND `started_at` < DATE_SUB(NOW(), INTERVAL X HOUR)) ORDER BY `id` ASC LIMIT 1;

    By grabbing jobs that haven’t returned results in X amount of time we ensure that all jobs are run in the event of a worker crashing or going AWOL.

  3. Next grab the job’s details, followed by the records themselves:
    SELECT * FROM `jobs` WHERE `started_by` = __ME__ AND `status` = 1 LIMIT 1;
    SELECT * FROM `job_records` WHERE `jobs_id` = __JOBID__;
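Here is a minimal PHP sketch tying those three steps together, assuming the PDO connection from earlier and an arbitrary two-hour stale-job timeout:

<?php
// Claim and fetch one job for this worker ($workerId could be its IP address).
function claimJob(PDO $db, $workerId)
{
    // 1. Reset any of our own jobs that never completed (e.g. we crashed).
    $stmt = $db->prepare("UPDATE `jobs` SET `status` = 0
                          WHERE `status` = 1 AND `started_by` = ?");
    $stmt->execute(array($workerId));

    // 2. Claim a single unstarted (or stale) job.
    $stmt = $db->prepare("UPDATE `jobs`
                          SET `status` = 1, `started_by` = ?, `started_at` = NOW()
                          WHERE `status` = 0
                             OR (`status` = 1 AND `started_at` < DATE_SUB(NOW(), INTERVAL 2 HOUR))
                          ORDER BY `id` ASC LIMIT 1");
    $stmt->execute(array($workerId));

    // 3. Fetch the claimed job, followed by its records.
    $stmt = $db->prepare("SELECT * FROM `jobs`
                          WHERE `started_by` = ? AND `status` = 1 LIMIT 1");
    $stmt->execute(array($workerId));
    $job = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$job) {
        return null; // nothing to do right now
    }

    $stmt = $db->prepare("SELECT * FROM `job_records` WHERE `jobs_id` = ?");
    $stmt->execute(array($job['id']));
    $job['records'] = $stmt->fetchAll(PDO::FETCH_ASSOC);

    return $job;
}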

Upon completion of the job we insert our result records and mark the job as complete. Remember, as workers can suspend/resume at any time, allow for some robustness in your script. It might be that the task suspends halfway through updating the job control system, so checking the number of records in a job against the number of results saved back to the job control system would be a wise move.

In addition, whilst this demonstrates how jobs can be selected and managed within an SQL-query frame, you should really be abstracting your job control so that if you decide to switch to a web service, a file-based system, XML, or any number of other systems, it will not affect the code above it.
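As a sketch of what that abstraction might look like (the interface name and methods are mine, purely illustrative):

<?php
// Workers talk to job control only through this interface, so the SQL
// implementation can later be swapped for a web service, a file-based
// system, etc. without touching worker code.
interface JobControl
{
    /** Claim a job and return it with its records, or null if none available. */
    public function claimJob($workerId);

    /** Submit the results for a completed job. */
    public function submitResults($jobId, array $results);
}

// The SQL-backed version simply wraps the queries shown above.
class SqlJobControl implements JobControl
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    public function claimJob($workerId)
    {
        return claimJob($this->db, $workerId); // from the previous sketch
    }

    public function submitResults($jobId, array $results)
    {
        // insert into `job_results` and mark the job complete
    }
}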

Job Configuration

The next aspect to consider is job size and configuration. By playing with the job configuration we can strike an excellent balance between speed, process replication, and reliability. Take a couple of scenarios:

  1. Jobs take 1 day each to run: This means that your workers need 15 days to process each job (remember: 10% of the power for 2/3 of the time). This is clearly not a wise configuration; your job size is way too big! It would take at least double the time to get a job processed should the initial worker go AWOL (time to notice that it hasn’t returned a result, plus reprocessing time). Ideally you’d have at least one full job easily cleared by the end of each long idle period; that way you keep the jobs ticking over, and in the worst case a job would take two days to process should the first attempt go missing.
  2. Jobs take 1 minute to run: This means that your workers take about 15 minutes to run each job. Whilst this may initially seem ideal, as you gain additional processing during lunchtimes, coffee breaks, meetings, etc., this scenario puts strain on other areas of your system and introduces its own problems. Firstly, the ratio of setup time to useful processing time is going to climb, losing system efficiency. Your network is going to be constantly streaming job information to the various workers, frustrating staff who are doing their day-to-day work. You’re also going to put more strain on your job server, as it has to dish out lots and lots of small pieces of work on a regular basis. Lastly, in this situation, if your job server goes down you’re going to create a huge backlog of uncompleted work, whereas bigger jobs could have continued processing, blissfully unaware that the job server was experiencing difficulties.

In reality there will be no single ideal configuration for your grid setup; much depends on the available resources, types of job, job turnaround time requirements, network capability, and so on. However, some guidelines would be:

  • Size jobs so that each worker can get through at least 3-4 jobs in a period of 15 hours (the longest likely idle time period)
  • Play with the job size so that setup time becomes fairly insignificant compared to the processing time (bearing in mind the above point)
  • If a job doesn’t complete in double the time (maybe less) you expect it to take, assume that it’s gone AWOL and start processing it with another worker. This means you may have to wait up to three times the normal length of a job for it to complete (possibly longer if the subsequent attempt fails). You may want to reduce this time, but be careful not to reduce it too much or you may start duplicating processing tasks on a regular basis
  • Jobs should be independent of outside requirements as much as possible. The job server, for example, should only be contacted at the start and end of every job
  • Don’t saturate your network; this has two negative effects: your daytime staff will find using the network frustrating, and connections may start timing out, a problem that will only get worse as you scale your grid
  • Ensure jobs can actually run on your workers. If jobs become too memory- or disk-space-intensive they will start aborting, and the only thing you’ll notice is a drop in the number of jobs processed with no real reason why

Submitting Results of a Job

When submitting the results of a job it is important to check that results have not been submitted by another worker, especially if the current worker has been dormant for some time.

When results are submitted ensure that the number of results matches the number of records within the job.

As stated previously, and it cannot be overemphasised: build fault tolerance into job retrieval and results submission. The workers can (and most likely will) go into suspend mode at the most inconvenient of times, and this needs to be catered for. Once again, abstracting away your results submission will make future changes to your job control system much easier to deal with.
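As a minimal sketch of a submission routine with those checks in place (again assuming the tables and PDO connection from earlier):

<?php
// Submit results for a job, guarding against double submission and
// incomplete result sets. $job is the array returned by claimJob();
// $results maps each job record id to its result data.
function submitResults(PDO $db, array $job, array $results)
{
    // Another worker may have finished this job while we were suspended.
    $stmt = $db->prepare("SELECT `status` FROM `jobs` WHERE `id` = ?");
    $stmt->execute(array($job['id']));
    if ($stmt->fetchColumn() == 2) {
        return false; // already completed elsewhere
    }

    // The number of results must match the number of records in the job.
    if (count($results) != count($job['records'])) {
        return false; // incomplete run; leave it for another worker
    }

    $insert = $db->prepare("INSERT INTO `job_results` (`job_record_id`, `result`)
                            VALUES (?, ?)");
    foreach ($results as $recordId => $result) {
        $insert->execute(array($recordId, $result));
    }

    // Finally, mark the job as complete.
    $stmt = $db->prepare("UPDATE `jobs` SET `status` = 2 WHERE `id` = ?");
    $stmt->execute(array($job['id']));

    return true;
}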

Summary

In this section we have looked at what a job control server needs to do and how to get a very basic system set up. We discussed how to retrieve a job from the control system and how best to configure jobs to get the most out of your office grid system. To finish, a paragraph or two on submitting results back to the job control server was presented.

  • A job control server manages jobs and ensures that all work units are completed
  • By abstracting the job selection/results submission we can change the technology of the control server without much trouble
  • Configure your jobs to ensure that they are run quickly and efficiently, without putting too much pressure on your network infrastructure and without duplicating processing tasks on a regular basis
  • Ensure that you build fault tolerance and error checking into your routines; workers can suspend and resume at the most inconvenient of times. Remember to check whether results have already been submitted by another worker

Next time

In part 3 we’ll create our virtual processing machine and set up our Windows machines to become idle-time workers.

Zend Framework: Fundamentals – Review

Saturday 28th November 2009, 10:42 pm

My employer recently paid for a group of us developers to take the Zend Framework: Fundamentals course; here I’ll summarise my thoughts and opinions on the course for others. For those looking to save time, here’s my summary:

For developers who haven’t had time to look at the Zend Framework, this course (Zend Framework: Fundamentals) offers a good overall picture of the framework, introducing you to the key areas and giving enough information for you to continue on your own. For those who have spent time looking at the framework and have followed one or two tutorials, this course does not offer much beyond that.

Background

I’ve been a PHP developer for around 5-6 years, and have started working with the Zend Framework on a component basis over the last 6 months. I’ve developed, and/or been a developer on, a couple of small Zend Framework MVC sites. I’ll be honest: I haven’t had a huge amount of exposure to other frameworks from a coding point of view, but I have spent several hours researching the project websites and evaluating them. The framework and the community surrounding it are quite exciting, and there seem to be huge possibilities in where it’s going.

About the Course

The course is delivered over nine two-hour WebEx sessions (with a 10-minute break in the middle). The time is spent going through a set of slides provided by Zend, with discussion at any time. You can use a microphone to talk to the instructor, but to be honest I didn’t see anyone use anything more than the chat window. In addition, a VMware Ubuntu machine is provided that has example code and projects set up, and a trial version of Zend Studio. The course leader talks to attendees either over an integrated VoIP solution, or you can dial in using one of the many worldwide dial-in numbers.

The course material consists of a brief overview of the framework and the MVC pattern before heading into a sample guestbook application. The discussion demonstrated bootstrapping, Zend_Application, Db tables, database access, forms, filtering, ACL, validating, etc.; basically it covers all the topics you’d require to get a basic site up and running, all the time giving you the tools to go and get more advanced in the framework (although this did amount to ‘see the website’ much of the time).

Time is given to code up some examples, and to develop the ‘guestbook’ and a simple ‘wiki’ application. Personally, I felt that providing the code for each app and then asking us to develop what was essentially a copy alongside didn’t really provide a good learning experience. I would have preferred to develop an application similar, but not identical, to the example application, with the benefit of having a guide to refer to. Alternatively, building the applications from scratch with the demonstrator would possibly have led to more questions about why and how, thus giving a better understanding of the framework; after all, you can look up specifics after the course.

The last lecture consisted of working on the wiki application with help/guidance from the instructor. Afterwards, course feedback was taken; it was emphasised several times throughout the course that Zend takes feedback very seriously, and in fact apparently our version of the course was quite new. Some of the other developers in the company will be taking the course soon, so it will be interesting to see whether the feedback has been acted on.

The course style was informal and allowed for feedback and collaboration between attendees and the instructor. The course leader was friendly and approachable (email addresses were shared for questions), and whilst his presentation from the slides was a bit shaky, he seemed fully competent in the framework. He was clearly someone who uses the framework on a regular basis rather than someone taught to teach the course; I liked the ‘real world’ experience in that respect.

Overall Feeling

In some ways I found the course a waste of time; in others it was very handy. Hopefully I’ll get my reasons across clearly, and maybe provide some food for thought or useful feedback (knowing me, this is unlikely!).

For me, this course was aimed at too low a level. Having gone through the quickstart guide, read Rob Allen’s Zend Framework in Action, and worked with the framework a little, I didn’t really learn all that much. I would have liked the course to pick up from the end of the quickstart and develop additional skills.

That said, the course title does clearly state “Zend Framework: Fundamentals”, and in that respect the course achieves what it sets out to do. Other members of the development team who haven’t spent time looking into the framework finished each session with enthusiasm and asked questions, which was really nice to see.

All was not lost: it was good to spend time confirming the basic details of the framework and to ask a couple of questions in areas where I wasn’t 100% sure. It was also time each day when I got to sit down and think about coding with the framework and about future projects, something I wouldn’t have been able to do otherwise (can you imagine your company agreeing to that? :) ). Last but not least, you also get a nice certificate from Zend to say that you attended the course (albeit by email).

Zend Framework Certification

This was one question that kept coming to mind during the course: would it prepare me for the certification? The quick, easy answer is a resounding no. The course instructor was quite clear on that, with the additional advice that for the certification you should really be using the framework on a day-to-day basis and feel very comfortable and confident in its usage and methodologies.

Summary

Given everything I’ve written above, I’ll summarise it in two easy bullet points:

  • New to Zend Framework: This course does exactly what you’d expect; it gives you a nice introduction to the framework and a good grounding in the basics from which you can build. The course seems to generate interest and enthusiasm for the framework amongst developers.
  • Used the Zend Framework: While it was nice to shore up some of the very basics, I felt the time, effort, and funds to take the course could have been better spent elsewhere. It would be nice to see Zend create a new, higher-level course to take developers to the next level, at least to the standard of the certification and beyond. For that I would sign up immediately.

Log to DB using Zend Framework

Tuesday 14th April 2009, 9:06 pm

I’ve managed to get a site up and running with the Zend Framework, and everything is logging nicely to FireBug/FirePHP, so the next step was to log to the DB. I also wanted to log some additional information such as the user agent, date and time, and GET and POST variables. So, to extend the manual a little, here’s what I did:

// Set up logging to DB
$db = Zend_Registry::get('dbAdapters');
$db = $db['general'];

// Map database column names (keys) to log event fields (values)
$columnMapping = array('priority'   => 'priority',
                       'message'    => 'message',
                       'datetime'   => 'timestamp',
                       'user_agent' => 'user_agent',
                       'get_vars'   => 'get_vars',
                       'post_vars'  => 'post_vars',
                       'site'       => 'site'
);

$writerDb = new Zend_Log_Writer_Db($db, 'error_logging', $columnMapping);
$logger = new Zend_Log($writerDb);

// Attach the extra fields to every logged event. Note that we override
// the 'timestamp' event item (set automatically by Zend_Log in ISO 8601
// format) so the datetime column receives a MySQL-friendly value.
$logger->setEventItem('timestamp', date('Y-m-d H:i:s'));
$logger->setEventItem('user_agent', $_SERVER['HTTP_USER_AGENT']);
$logger->setEventItem('get_vars', print_r($_GET, true));
$logger->setEventItem('post_vars', print_r($_POST, true));
$logger->setEventItem('site', SITE);

$logger->info('Informational message');

The array keys in $columnMapping are my column names. ‘priority’ and ‘message’ are understood by Zend_Log writers automatically, while the additional fields were added to give me some extra information.
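For completeness, the `error_logging` table might look something like this; the column types here are my guesses, so adjust to taste:

// Run once to create the logging table (using the same Zend_Db adapter).
$db->query("
    CREATE TABLE `error_logging` (
        `id`         INT UNSIGNED NOT NULL AUTO_INCREMENT,
        `priority`   TINYINT NOT NULL,      -- Zend_Log priority (e.g. 6 = INFO)
        `message`    TEXT NOT NULL,
        `datetime`   DATETIME NOT NULL,
        `user_agent` VARCHAR(255) NOT NULL,
        `get_vars`   TEXT NOT NULL,
        `post_vars`  TEXT NOT NULL,
        `site`       VARCHAR(100) NOT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB
");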

Obviously this assumes that you have logging working using one of the other writers first :)
