Thursday, December 31, 2009

Linda, Tuples, Rinda, DRb & Parallel Processing For Distributed Computing

When building enterprise solutions you often get into orchestrating your data and processes, and really you just need to distribute your load. Traditionally this has been solved with message queues, the occasional multi-threaded server (filled with pools and queues), and so on. More recently solutions like Hadoop, Gearman, et al. have sprung up, but one way to solve this problem is with your own solution implementing Linda and tuple spaces.

There is a lot of open source and commercial software that can help with this paradigm, but when you are building your own software you sometimes just need straight-up inter-process communication (without having to build your own socket-based messaging platform) so that your software can pass objects off to another process (literally) in your architecture and parallelize the "crunching" of your data accordingly.

Now, when this inter-process communication is available across the network and even self-aware (meaning that clients can find the server and register to get the data they need) we really have a powerful solution... but I digress.

"In computer science, Linda is a model of coordination and communication among several parallel processes operating upon objects stored in and retrieved from shared, virtual, associative memory. This model is implemented as a "coordination language" in which several primitives operating on ordered sequence of typed data objects, "tuples," are added to a sequential language, such as C, and a logically global associative memory, called a tuplespace, in which processes store and retrieve tuples." http://en.wikipedia.org/wiki/Linda_%28coordination_language%29

Ok, Ok, enough of the esoteric academic theory... let me introduce you to Rinda.

Rinda is the Ruby implementation of Linda, and it is a built-in library in Ruby (specifically part of DRb, which is Distributed Ruby). Yes, this comes out of the proverbial box in Ruby 1.8.1 and greater.

What DRb elegantly provides with Rinda is a RingServer: basically a solution for managing tuple spaces, plus a service for auto-magically finding the server, giving you all of the inter-process (and over-the-network) communication with your tuple spaces.

Without further ado I would like to send you here http://segment7.net/projects/ruby/drb/rinda/ringserver.html for your first look at Rinda. I found it especially useful; within 15 minutes I had it read, understood, and implemented in my software.
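Before diving into the RingServer, the core tuple space semantics can be sketched in a few lines of Ruby (in-process only, no DRb networking; Rinda ships with Ruby's standard library):

```ruby
require 'rinda/tuplespace'

# An in-process tuple space: producers write tuples, consumers take
# them back by pattern; nil in a pattern matches any value.
ts = Rinda::TupleSpace.new
ts.write([:add, 2, 3])

tuple = ts.take([:add, nil, nil])  # blocks until a matching tuple exists
result = tuple[1] + tuple[2]       # => 5
```

With a RingServer in the mix, that same `write`/`take` pair works across processes and machines, which is where the parallelism comes from.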

Now if you do not know Ruby this might be a good reason to learn it.

Or, if you are like me and do not care what language you use (just using the language to implement solutions for the problems at hand), then you can check out some other Linda implementations. I have not used any of these yet but I am sure I will.

  • Linda for C++
  • Linda for C
  • Erlinda (for Erlang)
  • Linda for Java
  • Linda for Prolog
  • PyLinda (for Python)
  • Rinda (for Ruby)
  • Linda for Scala (on top of Scala Actors)

As cloud computing continues to evolve, solutions like Linda and its various implementations could increasingly become how software frameworks are implemented, since raw multi-threading is a dead end (better than a deadlock) when trying to parallelize processing.

Sometimes all you need are chopsticks to catch a fly =8^)

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Thursday, October 29, 2009

Los Angeles goes to cloud computing with Google

It is somewhat appropriate that the city of Angels makes this move to get into cloud computing.

What is even more ironic is that they are doing it with Microsoft’s money.

http://www.cio.com.au/article/324089/google_apps_scores_la_assist_from_microsoft

“Google has pushed Google Apps as an option for government agencies, promising to ship a product called Government Cloud, which will be certified under the Federal Information Security Management Act (FISMA), sometime next year”

"According to a Sept. 15 memo from the Los Angeles Information Technology Agency, Google will "provide a new separate data environment called 'GovCloud.' The GovCloud will store both applications and data in a completely segregated environment that will only be used by public agencies.""

This is a big win for cloud computing on a few fronts as it continues to be seen as a way to save money while keeping (and at times enhancing) the confidentiality, integrity and availability of information systems.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Wednesday, October 28, 2009

Seeing through Windows into the Cloud at the Eclipse

Microsoft® has announced http://blogs.msdn.com/interoperability/archive/2009/10/28/tasktop-soyatec-microsoft-to-foster-eclipse-and-microsoft-platform-interoperability.aspx collaboration for interoperability between Eclipse (my favorite Java IDE) and Microsoft Windows®.

There are a couple of great highlights here and some fluff.

First the fluff (nothing wrong with looking nice while out on the town). Eclipse is going to be made useful for "next generation" user experience development for Windows 7 features.

Now on to the more exciting juicy pieces.

Microsoft has collaborated with Soyatec, a France-based IT solutions provider, to develop three solutions. These will open up the Azure cloud solution to not be 100% Microsoft-based, as well as give Microsoft a new following for its Silverlight client framework in a community that often has Sun in their eyes. More than anything this will open up the storage arena for MS to play a part in.

Along with the SDK there is a Storage Explorer in the Windows Azure Tools for Eclipse—it allows developers to browse data contained in the Windows Azure storage component, including blobs, tables, and queues. Storage Explorer was developed in Java (like any Eclipse extension), and during the Windows Azure Tools for Eclipse development with Soyatec they realized that abstracting the RESTful communication between the Storage Explorer user interface and the Azure storage component made a lot of sense. This led them to package the Windows Azure SDK for Java developers as open source, available at www.windowsazure4j.org.

Their interoperability strategy and open source direction is becoming competitive.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Wednesday, October 21, 2009

Mobile Internet Outpaces Desktop Internet Adoption

Mobile internet is taking off faster than the desktop.

iPhone + iTouch users = 8X AOL Users 8 Quarters after launch.



Mary Meeker's Awesome Internet Presentation From Web 2.0 http://www.businessinsider.com/henry-blodget-mary-meekers-internet-presentation-from-web-20-2009-10#mobile-adoption-curve-far-steeper-than-desktop-5 (Morgan Stanley).

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Cloud computing is not about providing a software architecture for scale... that is what Open Source does.

Recently I heard the comment, "WOW, elastic cloud computing is great. I can take on a lot more stress under any load, stand up an instance in a few minutes to accommodate usage on demand, and keep the app running without long-term cost or even contractual commitments." While this person is right, they did not know that by starting another instance you are likely just turning on another problem if the software application was never designed to be distributed.

Cloud computing (and the "elasticity" it can provide with Infrastructure as a Service, IaaS) is not about providing a software architecture for scale. Let me repeat this: cloud computing is not about providing a software architecture for scale. So what is it then, you ask?

Cloud computing provides an on-demand infrastructure so that your well-designed distributed enterprise software application can quickly scale based on the spikes and valleys of usage and interactions of your system (pay as you go, for only what you need). IaaS is about hardware resources given to your software to reduce expenses, but if your software is not designed to take advantage of them, then the opposite will happen, with CPU & memory running away under a false sense of security.

The issue is often that the internal workings of a software system are designed (to coin a phrase) "cloud monolithic". This means that software is usually designed to execute on a single server with a database (often a cluster), and to scale it you just add more servers and join the clusters together. Over the last 3-5 years many *VERY* large cloud-based services have emerged, and they have open sourced the solutions for how they scaled.

It is important to understand the inner workings of:

1) asynchronous processing
2) global caching
3) distributed and parallel processing

Without all three of these patterns working together you will actually compound your stress, with a load bottleneck at each blocking call inside your software. Your safety net of cloud computing turns into the proverbial wet blanket faster than it ever did before.

Let's break out each of these patterns, how they apply, and what solutions exist. In another post I will explain how these apply when dealing with a ridiculous amount of information processing on a large scale, with the time of processing exponentially reduced (by using map/reduce algorithms). I bring this up now because the way the map/reduce algorithms achieve the ability to handle and process so much information exponentially faster is realized in the software written to implement them, which also uses the technologies explained here.

1) Asynchronous Processing - OK, this is not really a new one, and often there are too many solutions to choose from, each with its own pros & cons (you have to make this call yourself). Queuing systems have been around for a long time and have numerous implementations in the marketplace—so numerous that often each language has its own set of queue servers to choose from. That being said, they are often NOT used correctly because #2 & #3 are not also implemented. I have seen many systems use an asynchronous process to let the bottleneck of a blocking synchronous call return more expediently, creating a perceived performance gain. The problem here is that you are just passing the problem on for another process to either eat up unnecessary cycles or fail to utilize unused cycles on other parts of your infrastructure (ultimately requiring you to get more servers or now turn on more instances).

Creating a performant software application is about taking both synchronous and asynchronous operations and making sure they utilize information that has already been "crunched" by other parts of the infrastructure [#2 global caching] and maximize the hardware so the crunching happens on the parts of the infrastructure currently doing the least "crunching" [#3 distributed and parallel processing].

So now maybe you are getting the problem and the solution, so here is how to implement it.

Global Caching with memcached http://www.danga.com/memcached/. "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load."

What this means is twofold. Instead of your application querying the database for information, it first checks the cache to see if that information is available. What makes this more powerful than using a static variable or custom solution is that memcached is a server that runs on EVERY machine an instance of your application runs on. So if server 3 pulls information from the database and adds it to the cache, it is a "global cache" that ALL instances/servers can make use of. This is extremely powerful because now every instance of your application benefits when all parts of the infrastructure are being used. In this scenario you have un-compartmentalized the "crunching": no instance has to repeat a "crunch" of information to reach a result that another instance/server has already gotten to for its request/response.
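That check-the-cache-first flow (often called cache-aside) can be sketched in a few lines of Ruby. A plain Hash stands in here for a real memcached client; the `expensive_crunch` and `fetch` names are made up for illustration:

```ruby
# Cache-aside sketch: check the cache first, fall back to the expensive
# "crunch", then populate the cache so every other instance benefits.
# A Hash stands in for a real memcached client library.
CACHE = {}

def expensive_crunch(key)
  key.to_s.reverse  # placeholder for real work (a query, a computation...)
end

def fetch(key)
  CACHE.fetch(key) do      # cache miss: run the block
    value = expensive_crunch(key)
    CACHE[key] = value     # populate the (global) cache for everyone
    value
  end
end
```

With memcached the `CACHE` lookups become network calls against the shared pool, so a value crunched by one instance is immediately visible to all the others.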

Now, this is a HUGE reduction in what often stresses a system, but in and of itself it will not reduce the processing to the degree we are trying to get to, because "at the end of the day" that "crunching" still has to occur. The crunching in the memcached implementation will still happen (hopefully asynchronously, once you find it is not in your global cache and you have to "crunch" =8^0 ).

Now you need to crunch because your data is not in memcached, or perhaps you have to crunch for some other reason (that is what software does, right?). Just moving this to happen in the background, off onto another process, provides no benefit by itself within a multi-server environment.

A la "distributing the processing" and "executing it in parallel" which is where Gearman comes in http://gearman.org/. "Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. In other words, it is the nervous system for how distributed processing communicates."
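A real deployment runs a gearmand server plus Gearman client/worker libraries; as a rough in-process stand-in for the farm-out idea (workers pulling jobs from a shared queue and crunching in parallel), with all the names invented for illustration:

```ruby
# Farm-out sketch: a shared job queue with workers pulling from it.
# Plain Ruby threads play the workers here; Gearman does the same
# thing across machines and languages via gearmand.
jobs    = Queue.new
results = Queue.new

workers = 4.times.map do
  Thread.new do
    while (n = jobs.pop) != :done
      results << n * n   # the "crunch" for one job
    end
  end
end

(1..8).each { |n| jobs << n }        # client submits jobs
workers.size.times { jobs << :done } # one stop sentinel per worker
workers.each(&:join)

total = 0
total += results.pop until results.empty?
```

The point is the shape, not the threads: once the jobs live in a queue that any worker can drain, the workers can be on whichever machines currently have spare cycles.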

Both memcached and Gearman are servers with a great following and multi-language client support. They are written in C so they will execute better than if they had to deal with an interpreter or virtual machine. They might be overkill, but if you find yourself with bottlenecks I hope you think about the design and internal architecture of your system before you throw more hardware at the problem (especially now that this can be done with a few clicks to launch an instance).

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Friday, October 2, 2009

Ruby on Rails with OAuth for integrating TripIt

Here is an example of utilizing OAuth with TripIt using Ruby on Rails.

OAuth is an open protocol to allow secure API authorization in a standard method for desktop, mobile and web applications. http://en.wikipedia.org/wiki/OAuth

TripIt is an interesting social networking application for your travel itinerary http://www.tripit.com/

All the examples below are to be run in "irb" but work fine in your rails app. You need to figure out where to store the variables I pass in (this is up to you and how your app is set up, of course).

First things first... you need a TripIt developer account http://www.tripit.com/developer. Make sure you add an application (call it what you like) and submit it so that you get an "API Key" and an "API Secret".

For our example (since I do not want to give you mine, NOR should you give out yours to others) I will use "api_key_shhhh" and "api_secret_shhhh" as the values that you will be getting from TripIt.

Now before we get started, make sure you go into your regular user account on TripIt and add yourself a trip (or more). This example will list trips, so you need some to see the XML we are going to query through the API.

Let's get "oauth" installed now:

gem install oauth

OK, now to the code (all of this is for irb but you can have it work in your rails app, no problem).

gem 'oauth'
require 'oauth/consumer'

api_key = "api_key_shhhh"
api_secret = "api_secret_shhhh"

@consumer = OAuth::Consumer.new(api_key, api_secret, :site => "https://api.tripit.com")

@request_token = @consumer.get_request_token
@request_token.secret

# Now in your rails app you want to redirect to (and create dynamically) the URL we are creating by hand, which we will copy and paste.

OK, NOW THIS IS IMPORTANT. Do not use @request_token.authorize_url because the URL is wrong. In your rails app you should dynamically create what we are about to do by hand (concatenate yourself silly). ALSO, the URL that you put into your settings when creating the application... if it was localhost (or blank) this will not work, but have no fear, there is a simple workaround: overriding it in the URL.

There are a few important parts of the URL, and you need to take the @request_token.secret value, which for this example I will call XXXXXXXXXXXXXXXX.

Put this in your browser now https://www.tripit.com/oauth/authorize?oauth_token=XXXXXXXXXXXXXXXX&consumer_key=api_key_shhhh&oauth_callback=http%3A%2F%2Fwww.yahoo.com

TripIt will now ask the user if it is alright for the application you just created (in the real rails app, redirect_to the concatenated URL you made) to access their account (in our example this should be your own account you are granting your application access to). Now notice the oauth_callback. In a "real world" rails app that should be YOUR application's URL, which TripIt will redirect the user back to when done. The whole URL has to be URL encoded, and "consumer_key" is that first value you get from TripIt (NOT THE SECRET) when you submit your application.
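In the rails app, the hand-concatenation might look something like this (the token, key, and callback values are the placeholders from above):

```ruby
require 'cgi'

oauth_token  = "XXXXXXXXXXXXXXXX"                  # the request token value from above
consumer_key = "api_key_shhhh"                     # your API Key (NOT the secret)
callback     = CGI.escape("http://www.yahoo.com")  # callback must be URL encoded

url = "https://www.tripit.com/oauth/authorize" \
      "?oauth_token=#{oauth_token}" \
      "&consumer_key=#{consumer_key}" \
      "&oauth_callback=#{callback}"
```

In the real app you would `redirect_to url` instead of pasting it into a browser.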

Ok, now you are just about done.

TripIt trusts you and now you just have to save that trust to use later in your app.

To save that trust (back to your terminal irb, picking up where we left off):

@access_token=@request_token.get_access_token
@access_token.token
@access_token.secret

NOW SAVE the token and secret from the access token (wherever you like, for THIS user).

OK, last step. Now that the user has authorized, you want to keep using that authorization to do TripIt actions for that user through the API (they would get annoyed if you had to keep asking them because you skipped this step).

Copy both @access_token.token and @access_token.secret; you are about to need them.

Exit irb.

Now go back into irb so you can see it all still working fine.

gem 'oauth'
require 'oauth/consumer'

API_KEY = "api_key_shhhh"
API_SECRET = "api_secret_shhhh"

ACCESS_TOKEN = "@access_token.token"   # paste in the token value you copied from irb
ACCESS_SECRET = "@access_token.secret" # paste in the secret value you copied from irb

@consumer = OAuth::Consumer.new(API_KEY, API_SECRET, :site => "https://api.tripit.com")

@access_token = OAuth::AccessToken.new(@consumer, ACCESS_TOKEN, ACCESS_SECRET)
puts @access_token.get('/v1/list/trip')

And here you go, now with your XML back from TripIt per their spec http://groups.google.com/group/api_tripit/web/tripit-api-documentation---v1

Make sure "@access_token.token" and "@access_token.secret" are replaced with the values you saw in irb and copied before you closed it, as those variables are GONE.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Cyber Security in Government, ONLY JUST NOW?

So it looks like the Department of Homeland Security will be moving to create a more secure infrastructure (or something) for our government facilities that use "computers" by starting to hire cyber security analysts http://www.cnn.com/2009/POLITICS/10/02/dhs.cybersecurity.jobs/index.html?iref=newssearch.

What concerns me most about this is that 1,000 people seems to be a mad rush now to have something in place which I feel should have been there all along. Is the NSA not cutting it or not working nicely with the DHS? What role is the DHS looking to play now in an industry full of consultants and information technology folks within organizations fighting the good fight?

Let me be the first to label these folks "Blue Hat" (since Red & White are taken, and Blue seems to make sense here...).



/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Thursday, September 3, 2009

Open Source Cloud Interoperability

The momentum in technology often picks up when developers converge to create Open Source solutions that can be used to solve interoperability issues. Arguably this is not required to be Open Source but I am of the mind that something magical happens when Open Source is the catalyst (e.g. "The Internet" Boom after the proprietary PC market had to contend with Linux...). When Open Source has the backing of an organization that is built around prospering with it the "stars begin to align".

Now a hat has been thrown into the ring (the kind that makes people blush for having talked about this yet done nothing) with an Open Source cloud interoperability solution.

Red Hat has just announced Deltacloud and "The goal is simple. To enable an ecosystem of developers, tools, scripts, and applications which can interoperate across the public and private clouds." http://press.redhat.com/2009/09/03/introducing-deltacloud/

http://deltacloud.org/

Start an instance on an internal cloud, then with the same code start another on EC2 or Rackspace. Deltacloud protects your apps from cloud API changes and incompatibilities, so you can concentrate on managing cloud instances the way you want.

Deltacloud gives you:

  • REST API (simple, any-platform access)
  • Support for EC2 and RHEV-M; VMware ESX and RackSpace coming soon
  • Backward compatibility across versions, providing long-term stability for scripts, tools and applications

One level up, Deltacloud Portal provides a web UI in front of the Deltacloud API. With Deltacloud Portal, your users can:

  • View image status and stats across clouds, all in one place
  • Migrate instances from one cloud to another
  • Manage images locally and provision them on any cloud

I am interested to see how Microsoft Azure responds (if at all) http://www.microsoft.com/azure/default.mspx

This is not the first such effort, but I believe it to be another big push for cloud computing.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Thursday, August 27, 2009

Identity as a Service (IDaaS)

Introduction

Identity as a Service (IDaaS) is fundamentally the externalization and management of identities in the cloud. The meaning of IDaaS branches depending on whether the service is software, platform, or infrastructure based, for both public and private clouds. Identities can still be managed internally within an organization but externalized through a Service Oriented Architecture (SOA), creating a Platform as a Service (PaaS) layer (either public and/or private) to facilitate an I&AM cloud-based solution. Identities can also be externalized without having an SOA in place by having those identities managed in a cloud utilizing SaaS. Machine images can also be created within an Infrastructure as a Service (IaaS) cloud environment so that pre-configured I&AM instances can be launched and used.

Cloud Security Challenges

The challenges for IDaaS differ not just across the SPI stack ("SaaS, PaaS, IaaS") but also in how security impacts the specific stakeholders utilizing the cloud for an identity management solution. Corporate IT and R&D traditionally manage users differently (users internal to an organization and users within products/services, respectively). The challenges of each inherently become different as a result, and the challenges with IDaaS have to be applicable to each stakeholder. The challenges that consumers face having their identity serviced in a cloud environment are also very different and bring about issues of reputation that must start being considered (by both vendor & consumer).


Issues and Challenges

SaaS

Corporate IT: When implementing IDaaS, essentially outsourcing to another provider, the privacy of your internal employee information needs to be considered. This implementation is non-federated, so it becomes possible for user credentials (if compromised) to grant access to internal systems. How passwords are stored, how personal information is protected, and the Software Development Lifecycle (SDLC) of the provider's product all need to be in line with your policy as if they were internal. Consideration now needs to be given to non-physical access to authenticate. Consideration also needs to be given to how administrative access is granted, and whether doing this over the Internet creates an attack vector.


R&D:
Products/services face different challenges when externalizing I&AM to software managed by others. Which specific parts of the solution are coupled to yours determines what security to incorporate into your product development lifecycle.


Consumers:
There is of course a need to make sure that the privacy policy of the software you are using is acceptable to you in how they share your personal information, but this alone is not enough. Your identity as a consumer in the cloud is becoming a commodity. Identity for consumers is more than just your personal information (i.e. social security number). For consumers, identity is now about reputation: feedback on auction items, your social feed & following friends, and professional recommendations all have an impact on your identity in the cloud.

PaaS

Transactional integrity across multiple SOA operations creates audit issues. While providing interoperability, an SOA is not transactional across disparate interfaces. Depending on the implementation of the interface, it may not have transactions across operations/methods even with a session existing.

IaaS

Here the issues & challenges for Corporate IT and R&D are intrinsic to how, and by whom, the machine image was made. Open sourced solutions vs. specific vendor products on these images also have different issues of trust and assurance (quality & vulnerability) in the code you expect to be running as a service to have an I&AM instance available to launch.

Solutions and Recommendations

Identity as a Service should follow the same best practices that an I&AM implementation does, along with added considerations for privacy, integrity, and auditability.


Solution Options

SaaS

Corporate IT has to review the options the cloud provider has to couple with their network, either through VPN or a proprietary gateway device. The reduction of cost from using the cloud needs to have the risk mitigated around the privacy considerations of having employee information stored remotely and how the cloud provider is managing that data (e.g. encryption of data at rest).


R&D
R&D teams need to bring into their SDLC the interactions with I&AM providers as part of their threat assessment. The specifics of this can be reviewed in the Application Security domain in regard to vulnerabilities.


Consumers
Consumers need to review the privacy policies of wherever they have their information. This however is not sufficient, as your identity (beyond your personal information, i.e. social security number) is now also about your reputation. Your use of cloud systems ties directly back to who you are, and this will continue as more systems map identities and provide federated interfaces. It is important to understand that the information you have in the cloud ties back to you. Information that would affect your reputation should have a level of protection equal to what you would give it in the real world. It is vital that cloud providers understand that this is about privacy of information (not necessarily the identifying characteristics of an identity, but in fact the information related and known to an identity as a person).

PaaS

Stay away from proprietary solutions for any part of what you have broken out of your traditional I&AM environment. It is important to keep to standards for the components of I&AM that you are decoupling within your implementation, and to verify that they are followed in practice by the cloud provider and used correctly. If a standard is not yet widely adopted, that is not a deterrent, but it should carry more caution than one that has been generally available, adopted, and continually supported, e.g. XACML for authorization, XDAS for distributed auditing, and SPML for provisioning.

IaaS

Images created by others need to have some support & maintenance around them. When open sourced, the integrity of an instance has to be reviewed with some caution, as the build may not be what you expect unless it comes from a reputable organization that is willing to support it.

Recommendations

· Use the cloud to add and remove redundancy in resources without sacrificing existing practices.

· Keep all existing practices to I&AM in place with additional focus when moving your data off-site and/or decoupling the pillars of the solution into an SOA.

Questions for your Provider and Assessment Checklist

· Please provide any documentation you have outlining the security architecture of this solution, covering web services security, authentication, audit trails, user-id timestamps, etc.

· Please describe what protocols / options are available for single sign on.

· Please provide security administration manual or security portions of the system administration manual.

· Please describe user account and password controls and options.

· Please describe security reports available from the system. Please provide sample reports.

· Please provide us a copy of your privacy policy.

· What standards do you support?

· Do we have the option to have parts of the cloud private?

Future Outlook

The cloud provides a number of benefits, with IDaaS becoming a maturing part of this revolution, yet this specific market is still very early in its development. Some organizations, at the time of this writing, have staked their claim to managing identities in the cloud and externalizing identities through SOA. They often provide both cloud and non-cloud (traditional) options, so a viable solution can be put in place that meets your requirements. Cloud providers need to keep offering options between traditional Identity Management and Identity as a Service while bolstering the on-ramp towards maturity and across the chasm.


For consumers, the lines between clouds and the reputation of your identity will continue to blur. The time may never come when your auction's negative feedback affects your ability to get a loan, but these are the security issues to understand and continue to address.


References

http://blogs.forrester.com/srm/2007/08/two-faces-of-id.html

http://www.aspeninstitute.org/publications/identity-age-cloud-computing-next-generation-internets-impact-business-governance-socia

http://blog.odysen.com/2009/06/security-and-identity-as-service-idaas.html



/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Sunday, August 23, 2009

NIST Updates Cloud Computing Definition (v15)

Computer scientists at NIST continue to develop their draft definition of cloud computing in collaboration with industry and government. They have been posting their working definition of cloud computing, which serves as a foundation for their upcoming publication on the topic. NIST's role in cloud computing is to promote the effective and secure use of the technology within government and industry by providing technical guidance and promoting standards.

The changes between this draft and the previous one are mostly terminology and language, without any large structural changes. I believe that throughout the industry the base of cloud computing around the five essential characteristics, three service models, and four deployment models is beginning to hold steady.

http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc

Draft (V15) NIST Working Definition of Cloud Computing

Authors: Peter Mell and Tim Grance

8-19-09

National Institute of Standards and Technology, Information Technology Laboratory

Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.

Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.

Definition of Cloud Computing:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).


They also have a great presentation on "Effectively and Securely Using the Cloud Computing Paradigm v25" http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-computing-v25.ppt

To learn more about NIST's cloud efforts http://csrc.nist.gov/groups/SNS/cloud-computing/

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Sunday, August 9, 2009

Amazon Web Services - Elastic Compute Cloud (EC2)

So many folks know Amazon for their books and the oodles of other online e-commerce (buy and wait for it to get delivered) retail offerings. They also do a nice job (at least through my Roku) with on-demand movies and such. The skinny of this post is Amazon's "Web Services" (AWS), focused on the "Elastic Compute Cloud" (EC2) http://aws.amazon.com/ec2/ product.

The service started a few years back, but it has only been a year since they added the Elastic Block Store (EBS) http://aws.typepad.com/aws/2008/08/amazon-elastic.html, which (in my opinion) makes this a truly viable multi-server computing solution.

Now, I have not yet utilized this service in production so I cannot yet speak to that, but so far I have spent some cycles on the development side and honestly I am not sure what I ever did without it.

There is a very small learning curve to get the management console moving along, but the getting started guide is well put together http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/

Once you get the hang of the console (as I say, this is straightforward), you can go and find your machine images http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=171. These images are made by Amazon, Sun, IBM, Oracle and an entire community of folks who share images they have made.

Here are just a few:

Perl Web Starter
Fedora Core 8, 32-bit architecture, Perl, Mason, Apache 2.0, and MySQL.

Java Web Starter
Fedora Core 8, 32-bit architecture, Java 5 EE, Tomcat, Apache, and MySQL.

LAMP Web Starter
Fedora Core 8, 32-bit architecture, PHP5, Apache 2.2, and MySQL.

Ruby on Rails Web Starter
Fedora Core 8, 32-bit architecture, Ruby, Rails, RubyGems, Mongrel, and MySQL.

Amazon Public Images - Windows Server 2003 R2 With Authentication Services and SQL Server Express + IIS + ASP.NET (32bit)

Amazon Public Images - Windows Server 2003 R2 and SQL Server Express + IIS + ASP.NET (64bit)

Being a developer I like to have platforms ready to go for accomplishing what I need to get done. Having these "pre-packaged" environments that I can utilize (put simply) reduces cycle times and allows focus on the task at hand.

Amazon is not the only provider (just the only Infrastructure as a Service (IaaS) provider I have used).

Here are some other services:


Some open source projects (in case you happen to have your own data center with nothing to-do):

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Saturday, August 8, 2009

Facebook RSS News Feed Reader

So recently I started using an RSS reader, Google Reader as it happens to be. I like being able to pull together news, Slashdot, some sports, etc., but it still left me with the occasional trip to Facebook to look at the news feeds of my friends. After a little digging I could not find anything that would make my Facebook News Feed available as an RSS feed.

So, here now exists the Facebook RSS News Feed Reader http://apps.facebook.com/rssnewsfeedreader. This application allows you to view your Facebook News Feed from within your favorite RSS reader.

What is RSS you may ask? RSS stands for Really Simple Syndication and the specification is maintained here http://www.rssboard.org/rss-specification.

Here is a quick overview of how it is done.
First you need to get into the Facebook API a little bit: both how to set up a Facebook app and how their streams work.

- Facebook Getting Started http://developers.facebook.com/get_started.php?tab=tutorial
- Facebook Streams http://wiki.developers.facebook.com/index.php/Stream_%28FQL%29

I developed the application in PHP, so here is a little more about how that part works. Basically you have to create a type of proxy: the RSS reader connects to your PHP (or other language based) application, which then has to authenticate to Facebook internally based on the parameters (for instance, the session) being passed in.

From here you need to:
1) Setup the session for that user to Facebook in your application
2) Read the profile stream in your application based on the HTTP request
3) Create the RSS XML (by parsing the news feed stream), making sure you set it up correctly (i.e. having a guid element so each item is uniquely defined).

Now these 3 steps MUST occur AFTER you have had the user follow the steps below to give you the one-time authenticator so you can have the infinite session. Navigating the programming parts was pretty straightforward once I got through this, with a little more understanding about how "Facebook Infinite Session Keys Are NOT Dead!".

To get the infinite session key, you have to go to the following URL, replacing YOUR_API_KEY with your Facebook app's API key:
http://www.facebook.com/code_gen.php?v=1.0&api_key=YOUR_API_KEY. Once you click "Generate" you will get your one-time code.
With that in your PHP application you now do this:
$facebook = new Facebook($appapikey, $appsecret);//Create a new facebook object
$infinite_key_array = $facebook->api_client->auth_getSession($authtoken); //$authtoken is the value you got from the above one time step
$infinite_key_array['session_key'] holds the value for the session. You MUST store this (along with the user id, depending on your implementation).

Now that you have done this, the session key (and the person's user id) is all you need to continue. You can set up a session to Facebook in your PHP app and only require some parameters to be passed in for the RSS feed.

e.g.
$facebook->api_client->user = $_GET["u"];
$facebook->api_client->session_key = $_GET["s"];
$facebook->api_client->expires = 0;

From here, read your stream.
$feed = $facebook->api_client->stream_get();

Loop through your posts from $feed and create the RSS XML with the proper headers, e.g.
header("Content-Type: application/xml; charset=ISO-8859-1");
header("Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0", false); // HTTP/1.1
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past
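Putting steps 2 and 3 together, the shape of the feed-to-RSS conversion can be sketched out. The real application does this in PHP with the Facebook client library; this is just an illustration in Python, and the post field names ('post_id', 'message', 'permalink', 'created') are hypothetical, not the actual Facebook stream schema:

```python
import xml.etree.ElementTree as ET

def feed_to_rss(posts, title="Facebook News Feed"):
    # Build an RSS 2.0 document from a list of news-feed posts.
    # Each post is a dict; the keys used here are hypothetical.
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = "http://www.facebook.com/"
    ET.SubElement(channel, "description").text = "My Facebook News Feed as RSS"
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["message"][:80]
        ET.SubElement(item, "link").text = post["permalink"]
        ET.SubElement(item, "pubDate").text = post["created"]
        # The guid is what lets the RSS reader tell items apart.
        guid = ET.SubElement(item, "guid", isPermaLink="false")
        guid.text = post["post_id"]
    return ET.tostring(rss, encoding="unicode")
```

The PHP version does the equivalent by printing the XML string after emitting the Content-Type and cache-control headers shown above.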




/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Friday, July 31, 2009

Cloud Computing Momentum

Recently I have seen more and more momentum building around cloud computing.

I have been noodling a bit on what might be catalyzing this... perhaps the economic tide pushing a new birth (perhaps better put, a rebirth) of an industry, for folks to reshape themselves and keep/grow their business by leveraging their existing base of products & services. Could this be the next bubble? Should we hold tight?

It was not so long ago (10+ years) when ASP meant Application Service Provider and Co-Los (co-location facilities) were all the rage. Now we have re-branded them as SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service), and with the marketing flare-up, a renewed movement toward open standards and collaboration.

Amazon has helped lead the way in this movement. They offer a truly compelling set of functionality that allows for a real "outsourcing" of services you would otherwise run internally off into "the cloud" http://aws.amazon.com/.

It would be an insult (perhaps ignorant) of course not to mention Google. It would also take too much time to hit on their plethora of services. Most are not really that new in and of themselves, just re-branded. Remember Usenet? Now marketed as Google Groups (fka DejaNews). It is hard sometimes with Google to draw the line between "the internet" and "cloud computing" SaaS... but I digress and leave it up to the FTC.

Recently a number of well-known hosting companies (e.g. Rackspace) have gone to market as clouds and are releasing open source tools to promote interfacing with them as such http://www.rackspacecloud.com/cloud_hosting_products/servers/api

So what is all the difference about? It is about metering... pay as you go, and Web 2.0.

Here is a great overview of Cloud Computing Use Cases http://www.scribd.com/doc/17929394/Cloud-Computing-Use-Cases-Whitepaper

NIST has only given a working definition thus far http://csrc.nist.gov/groups/SNS/cloud-computing/index.html

So I call it a phenomenon and let all things be cloud 100110001000100010011101111


/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Wednesday, June 24, 2009

I blog therefore I tweet

It is quite interesting watching how Twitter has taken off. It is like the micro blog that anyone can do, because it takes nothing more than a quick thought that may or may not be worth reading.

Many bloggers, having had to create their own success by curating interesting content or information that runs to more than 100 characters, now have another mode of media competing for their effort.

Now this phenomenon of social networking will be going down the route of "islands of social awareness" for each and every person, the more they involve themselves with these systems.

There is no social hub solution (aggregating Facebook, Twitter, Flickr, etc.) into a single user interface (at least that I know of), but just as applications like Trillian came about because we were all crazed having to use AOL, MSN & Yahoo separately, I expect this to happen soon enough.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
*/

Sunday, May 3, 2009

Get your head out of the clouds, security is for everyone!

Not too long ago, before this post, at an RSA Conference far, far away, a new group was launched "To promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing." http://www.cloudsecurityalliance.org

Cloud computing has many labels and definitions, but for the uninitiated it is likely a label for something you already know and understand. For the purpose of this blog let us go by the guidance provided by this group.

Why you ask?

The reason the definition is taken from here is that a common definition is a fundamental premise, not only for securing interconnected & interoperable software systems but also for stabilizing them.

It is imperative to have a common understanding not only of what each operating part of a software application does and the interface(s) between them, but also of what the interfaces joining them provide as a solution, separate from the application's parts.

I commend this group’s approach and attempt to provide security in something almost impossible (for now) to draw a circle around (or any geometric shape for that matter).

This was taken from http://www.cloudsecurityalliance.org/guidance/csaguide.pdf


Principal Characteristics of Cloud Computing

Cloud services are based upon five principal characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

1. Abstraction of Infrastructure The compute, network and storage infrastructure resources are abstracted from the application and information resources as a function of service delivery. Where and by what physical resource that data is processed, transmitted and stored on becomes largely opaque from the perspective of an application or services’ ability to deliver it. Infrastructure resources are generally pooled in order to deliver service regardless of the tenancy model employed – shared or dedicated. This abstraction is generally provided by means of high levels of virtualization at the chipset and operating system levels or enabled at the higher levels by heavily customized file systems, operating systems or communication protocols.

2. Resource Democratization The abstraction of infrastructure yields the notion of resource democratization – whether infrastructure, applications, or information – and provides the capability for pooled resources to be made available and accessible to anyone or anything authorized to utilize them using standardized methods for doing so.

3. Services Oriented Architecture As the abstraction of infrastructure from application and information yields well-defined and loosely-coupled resource democratization, the notion of utilizing these components in whole or part, alone or with integration, provides a services oriented architecture where resources may be accessed and utilized in a standard way. In this model, the focus is on the delivery of service and not the management of infrastructure.

4. Elasticity/Dynamism The on-demand model of Cloud provisioning coupled with high levels of automation, virtualization, and ubiquitous, reliable and high-speed connectivity provides for the capability to rapidly expand or contract resource allocation to service definition and requirements using a self-service model that scales to as-needed capacity. Since resources are pooled, better utilization and service levels can be achieved.

5. Utility Model of Consumption & Allocation The abstracted, democratized, service-oriented and elastic nature of Cloud combined with tight automation, orchestration, provisioning and self-service then allows for dynamic allocation of resources based on any number of governing input parameters. Given the visibility at an atomic level, the consumption of resources can then be used to provide an “all-you-can-eat” but “pay-by-the-bite” metered utility-cost and usage model. This facilitates greater cost efficiencies and scale as well as manageable and predictive costs.

Cloud Service Delivery Models

Three archetypal models and the derivative combinations thereof generally describe cloud service delivery. The three individual models are often referred to as the “SPI Model,” where “SPI” refers to Software, Platform and Infrastructure (as a service) respectively and are defined thusly (credit to Peter M. Mell, NIST).

1. Software as a Service (SaaS) The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

2. Platform as a Service (PaaS) The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.

3. Infrastructure as a Service (IaaS) The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

/*
Joe Stein,
http://www.linkedin.com/in/charmalloc
*/