Chetan Solanki

Helpful to you, if you need it…

OpenCV articles from a magazine

Posted by Chetan R. Solanki on October 3, 2015

Hello friends,

I just found some articles related to OpenCV in a magazine from 2007. The articles date from a time when few people were aware of OpenCV. I hope they spark your interest in it.

OpenCV Article 1

OpenCV Article 2

OpenCV Article 3

OpenCV Article 4


Drupal vs Joomla vs WordPress

Posted by Chetan R. Solanki on June 25, 2014

WordPress, Joomla and Drupal are the three most popular content management systems (CMS) online. All three are open source and built on PHP + MySQL. All three vary significantly in terms of features, capability, flexibility and ease of use. Below, we’ll take a look at some of the advantages and disadvantages of each of these CMS solutions:

 

CMS Showdown

Drupal: Pros and Cons

Drupal is the granddaddy of CMS systems on this list – it was first released in early 2001. Like WordPress and Joomla, Drupal too is open-source and based on PHP-MySQL. Drupal is extremely powerful and developer-friendly, which has made it a popular choice for feature rich, data-intensive websites like Whitehouse.gov and Data.gov.uk.

Let’s consider a few pros and cons of Drupal:

Advantages of Drupal

  • Extremely Flexible: Want a simple blog with a static front page? Drupal can handle that. Want a powerful backend that can support hundreds of thousands of pages and millions of users every month? Sure, Drupal can do that as well. The software is powerful and flexible – little wonder why it’s a favorite among developers.
  • Developer Friendly: The basic Drupal installation is fairly bare-bones. Developers are encouraged to create their own solutions. While this doesn’t make it very friendly for lay users, it promises a range of possibilities for developers.
  • Strong SEO Capabilities: Drupal was designed from the ground-up to be search engine friendly.
  • Enterprise Friendly: Strong version control and ACL capabilities make Drupal the CMS of choice for enterprise customers. The software can also handle hundreds of thousands of pages of content with ease.
  • Stability: Drupal scales effortlessly and is stable even when serving thousands of users simultaneously.

Disadvantages of Drupal

  • Steep Learning Curve: Moving from WordPress to Drupal can feel like walking from your car into a Boeing 747 cockpit – everything is just so complicated! Unless you have strong coding capabilities and like to read tons of technical papers, you’ll find Drupal extremely difficult for everyday use.
  • Lack of Free Plugins: Plugins in Drupal are called ‘modules’. Because of its enterprise-first roots, most good modules are not free.
  • Lack of Themes: A barebones Drupal installation looks like a desert after a drought. The lack of themes doesn’t make things any better. You will have to find a good designer if you want your website to look anything other than a sad relic from 2002 when using Drupal.

Recommended Use

Drupal is a full-fledged, enterprise grade CMS. It’s recommended for large projects where stability, scalability and power are prioritized over ease of use and aesthetics.

Joomla: Pros and Cons

Joomla is an open-source content management system forked from Mambo. It is one of the most popular CMS solutions in the world and boasts over 30 million downloads to date. Joomla powers noteworthy sites such as Cloud.com and Linux.com.

Advantages of Joomla

  • User-Friendly: Joomla isn’t WordPress, but it’s still relatively easy to use. Those new to publishing will find its UI polished, flexible and powerful, although there is still a slight learning curve involved in figuring everything out.
  • Strong Developer Community: Like WordPress, Joomla too has a strong developer community. The plugin library (called ‘extensions’ in Joomla) is large with a ton of free to use, open source plugins.
  • Extension Variability: Joomla extensions are divided into five categories – components, plugins, templates, modules and languages. Each of these differs in function, power and capability. Components, for example, work as ‘mini-apps’ that can change the Joomla installation altogether. Modules, on the other hand, add minor capabilities like dynamic content, RSS feeds, and search function to a web page.
  • Strong Content Management Capabilities: Unlike WordPress, Joomla was originally designed as an enterprise-grade CMS. This makes it far more capable at handling a large volume of articles than WordPress.

Disadvantages of Joomla

  • Some Learning Involved: You can’t jump right into a Joomla installation and start hammering out new posts if you’re not familiar with the software. The learning curve isn’t steep, but it can be enough to intimidate casual users.
  • Lacks SEO Capabilities: Making WordPress SEO friendly is as easy as installing a free plugin. With Joomla, you’ll need to put in a ton of work to reach the same level of search engine friendliness. Unless you have the budget to hire an SEO expert, you might want to look at alternative solutions.
  • Limited ACL Support: ACL (Access Control List) refers to a list of permissions that can be granted to specific users for specific pages. ACL is a vital component of any enterprise-grade CMS solution. Joomla started supporting ACL only after version 1.6. ACL support is still limited in the stable v2.5.1 release, making it unsuitable for enterprise customers.

Recommended use

Joomla enables you to build a site with more structural stability and content than WordPress, and has a fairly intuitive interface. If you want a standard website with standard capabilities – a blog, a static/dynamic front-end, a forum, etc. then use Joomla. Joomla is also a good option for small to mid-tier e-commerce stores. If you want something more powerful for enterprise use, consider Drupal.

WordPress: Pros and Cons

New York Times, CNN, Forbes and Reuters – the list of WordPress.com clients reads like a publishing dream team. More than 68 million websites use WordPress, making it the world’s favorite blogging software. It is flexible enough to power Fortune 500 company blogs as well as sporadically updated personal journals.

Below, we take a look at some of the advantages and disadvantages of using WordPress:

Advantages of WordPress

  • Multiple Authors: WordPress was built from the ground-up to accommodate multiple authors – a crucial feature for any serious publication.
  • Huge Plugin Library: WordPress is the poster child of the open-source developer community, which has developed hundreds of thousands of plugins for it. There are few things WordPress can’t do with its extensive library of plugins.
  • User-Friendly: WordPress’ UI is easy to use and highly intuitive, even for first-time bloggers. You can drop a theme, add a few plugins, and start blogging within minutes.
  • Strong SEO Capabilities: With plugins like All in One SEO, you can start blogging straight away without worrying about on-page SEO.
  • Easy Customization: WordPress’ theming system is designed for easy customization. Anyone with a little grasp of HTML and CSS can customize WordPress themes to fit their needs.
  • Flexibility: WordPress can be made to do virtually anything – run an e-commerce store, host a video site, serve as a portfolio or work as a company blog – thanks to plugins and customized themes.

Disadvantages of WordPress

  • Security: As the category leading software with millions of installations, WordPress is often the target of hackers. The software itself isn’t very secure out of the box and you will have to install third-party plugins to boost your WordPress installation’s security.
  • Incompatibility with Older Plugins: The WordPress team constantly releases new updates to fix security loopholes and patch problems. These updates are often incompatible with older plugins. If your site relies on older plugins, you may have to hold off on updating (which makes your site all the more susceptible to hack attacks).
  • Limited Design Options: Even though WordPress is infinitely customizable, most WordPress installations still look like WordPress installations. Although recent updates and improvements in plugins/themes have rectified this problem somewhat, WordPress is still hampered by limited design options.
  • Limited Content Management Capabilities: WordPress was originally designed as a blogging platform. This has affected its ability to handle large amounts of content. If you plan to publish hundreds of blog posts per week (not uncommon for large publishers), you may find the default WordPress backend a little underwhelming for such high content volume.

Recommended Use

WordPress is often called a ‘mini CMS’. It isn’t nearly as powerful or capable as Drupal or Joomla, but is easy enough for any lay user. Use WordPress if you want a simple, easy to use blogging solution that looks good and can accommodate multiple authors easily.

Conclusion

Even though WordPress, Joomla and Drupal are built on the same technology stack, they vary heavily in features and capabilities. Hopefully, the above information will help you choose a CMS that fits your requirements.

Source: https://www.udemy.com


Debian vs Ubuntu

Posted by Chetan R. Solanki on June 25, 2014

A wise man once compared Linux to vanilla ice cream. It’s pretty nice by itself, but if you add some flavors and toppings, it turns into something else altogether. Debian and Ubuntu are just two of the many ‘flavors’ of Linux and count among the most popular Linux distributions around.

Debian and Ubuntu are both geared towards casual home users, though they can both accommodate hardcore programmers as well. The open-source community likes to posit them as worthy alternatives to Windows and OS X. In this article, we’ll see if this claim holds any merit and tell you which Linux distribution – Debian or Ubuntu – deserves your time.


Debian: A Brief Introduction

When Debian was first announced in 1993, it was a one-man project helmed by Ian Murdock, who was then a CS student at Purdue University. To give you an idea of the humble beginnings of the project, consider that the name ‘Debian’ itself is a portmanteau of ‘Debra’ – Ian’s then-girlfriend – and ‘Ian’. This would be like Bill Gates calling Windows ‘Billinda’, after Bill and Melinda Gates.

The project was envisioned as a robust, open-source distribution of Linux with an emphasis on community-first development. With the release of the first 0.9x version in 1994, Murdock was able to raise significant interest in the distribution. Soon, the open-source community – which was still in its nascent stages – rallied around the distribution and helped turn it from a hobby into a robust, capable OS.

Many hardcore Debian users will tell you that Debian is as much an OS as it is a philosophy. While most Linux distros espouse the ‘free software’ philosophy, few embrace it as wholeheartedly as Debian. This can be seen in the Debian Social Contract, a document that lists the guidelines open-source developers must adhere to (‘ensuring that the OS remains open and free’ being the top item on the list).

Today, Debian is developed and nurtured by a strong community of passionate developers. There is no commercial organization at the helm; it is completely operated and maintained by the community. In a way, Debian is a demonstration of what collaborative creation can accomplish.

Ubuntu: A Brief Introduction

It’s difficult to describe the word ubuntu from which the Ubuntu OS takes its name. Of South African origin, ubuntu can roughly be translated to ‘humanness, human kind, and human spirit’. It has been appropriated to stand for ‘oneness’ – a philosophy Ubuntu and the open-source software movement subscribe to wholeheartedly.

Ubuntu is essentially a fork of Debian. For those of you who don’t speak geek, a fork is when a developer takes a copy of source code from one project and starts development on it independently, thereby creating something unique and distinct from the original. Unlike Debian, which is community powered, Ubuntu is developed by Canonical Ltd., a private company helmed by serial entrepreneur (and space tourist) Mark Shuttleworth.

Ubuntu was created with an express desire to make Linux more approachable to average users. As such, it has a slicker UI, better support for media, and an easier installation process. This can be seen in the respective websites for Ubuntu and Debian as well. Because of this user-friendliness, Ubuntu has quickly become the most widely used Linux distribution with an estimated 20 million users worldwide.

Now that we know how Ubuntu and Debian originated, let’s check out their features, pros and cons.


Debian and Ubuntu Compared

Before we begin, you must understand that Ubuntu is based on Debian, which itself is just another ‘flavor’ of Linux. Since Ubuntu shares much of its codebase with Debian, it is usually as fast, flexible and powerful as Debian. What Canonical – Ubuntu’s developer – basically does is add a bunch of extra features, a nicer interface (based on Unity, not GNOME – don’t worry if you don’t know what they are) and easier installation.

Thus, comparing Ubuntu to Debian is, in some ways, comparing a kernel of corn to an entire corn cob (Linux, of course, is the field where the corn is grown). They are similar in most ways, though still different in some.

With that out of the way, let’s get started.

Ease of Use

Take a quick look at Debian.org. Then head over to Ubuntu.com.

No points for guessing which distro is easier to use!

Debian is a developer-first Linux distribution. Although it is robust, secure and powerful, it isn’t exactly designed for those new to Linux. The developer community has worked tirelessly in the last few years to make the installation process and basic setup easier, but it is still leaps and bounds away from Ubuntu’s usability.

Ubuntu, on the other hand, is aimed at inexperienced first time users. This is reflected in Ubuntu’s company slogan as well – “Linux for human beings”.

Some Ubuntu features that make it easy to use for inexperienced users are:

  • Easy installation: You can download a copy of the Ubuntu installation image onto a CD or pen drive and start the installation right away. You can also try the software without installing. Lately, Debian has started supporting easy installation and live demo features as well.

  • Ubuntu software center: Installing new software on Linux is like battling a honey badger in a cage – never easy and seldom safe. The Ubuntu software center makes package installation somewhat easier, giving you access to popular tools and software with zero mucking around with sudo. Installing software on Debian, on the other hand, will still require you to fire up the command line.

  • Cloud storage: It’s nearly impossible to read any tech-related news and not chance upon the word ‘cloud’. Ubuntu includes its own cloud storage service, Ubuntu One, which makes it easy to move to the cloud – something utterly missing in Debian.

  • Media management: Ubuntu’s media (pictures, videos, music) management is very user-friendly – almost like OS X or Windows 7. This is a clear advantage for home and casual users who want to use Ubuntu as their primary computer.

Conclusion

If ease of use is a concern, go with Ubuntu – you won’t go wrong. Debian has its positive points, but approachability isn’t one of them. Power users and admins, however, will love Debian’s minimalism.

Hardware Compatibility

A common complaint among those switching from Windows or Mac to a Linux OS is hardware compatibility. Simply put, a lot of hardware just doesn’t work with many Linux distros. This problem is especially acute if you have rare, really old, or really new hardware.

Ubuntu’s developers have worked extensively to improve the OS’ hardware compatibility, and it shows. Ubuntu will recognize most current hardware and a bunch of old stuff too. This means you can start using Ubuntu right out of the box without searching for hard to find drivers.

Debian’s hardware compatibility, however, is a little sketchy. Debian released v7.0 of the OS (codenamed, ahem, Wheezy) in May 2013 which is more stable and secure than ever, but also utilizes a significantly outdated Linux 3.2 kernel (current version is up to v3.12). This can lead to many hardware incompatibilities, especially if you are using older (or very new) hardware.

Architecture Compatibility

Ubuntu is primarily meant to be used on desktop devices. Hence, it only supports the hardware architectures commonly found in desktop computers – Intel x86 and x64. Debian, on the other hand, runs equally smoothly on everything from desktop computers (x86 or x64 architecture) to handheld devices (ARM architecture).

Here’s the full list of architectures supported by Debian 7 (Wheezy):

  • 32-bit PC (‘i386’)

  • SPARC (‘sparc’)

  • PowerPC (‘powerpc’)

  • MIPS (‘mips’ (big-endian) and ‘mipsel’ (little-endian))

  • Intel Itanium (‘ia64’)

  • S/390 (‘s390’)

  • 64-bit PC (‘amd64’)

  • ARM EABI (‘armel’)

  • ARMv7 (EABI hard-float ABI, ‘armhf’)

  • IBM System z (‘s390x’)

Conclusion

Ubuntu boasts better hardware support, though you may still have to search for specific drivers for some hardware devices. Debian, on the other hand, has better support for different hardware architectures.
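
If you are not sure which architecture your own machine uses, you can check from a terminal (a quick sketch – the exact output depends on your hardware and installed system):

# Kernel architecture, e.g. x86_64, i686 or armv7l
uname -m
# Architecture name used by the Debian/Ubuntu package manager, e.g. amd64, i386 or armhf
dpkg --print-architecture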


Software Availability

Another reason why Linux still lags behind Windows in adoption rates is the lack of compatible software. If you absolutely must use MS Office, or love to play games on your computer, you won’t find much to love on Linux.

Among Linux distros, Ubuntu supports the widest variety and volume of software. You won’t find MS Office or Photoshop, but you will still have the option to pick from hundreds of worthy alternatives. Some popular software alternatives in Ubuntu are:

  • LibreOffice: Alternative to MS Office. Works with Word, Excel and PowerPoint files.

  • VLC Player: Powerful media player that can run most media files without additional codecs.

  • GIMP: GIMP is a highly capable, free alternative to Photoshop.

  • Chrome and Firefox: You won’t miss much on Ubuntu when it comes to web browsers. The only major browser missing from Ubuntu is IE, although there’s little chance of anyone ruing its absence.

  • Steam: Gaming has always been a bone of contention between Windows and Linux users. The recent addition of Steam support in Ubuntu should quell some concerns for Linux users.

A lot of software that works on Ubuntu also works on Debian, though this isn’t always true. Overall, the Debian software library is poorer than Ubuntu’s. That said, most of the popular Linux software – GIMP, LibreOffice, VLC, Firefox – will work as well on Debian as on Ubuntu.
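
If you want to pull these alternatives onto either distro from the command line, the install is a one-liner (a sketch using the usual Ubuntu package names – on Debian some names differed at the time, e.g. Firefox shipped as Iceweasel):

sudo apt-get update
sudo apt-get install libreoffice vlc gimp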

Conclusion

Ubuntu boasts a larger software library than Debian, making it even more attractive for casual users.

Stability

As far as stability is concerned, a stable release (see how a ‘testing’ release becomes a ‘stable’ release) of Debian is more robust than the rock of Gibraltar. You can say goodbye to blue screens and random system crashes; Debian will almost never go down – which is why it is so popular for enterprise-grade applications and web hosting.

Ubuntu, since it shares Debian’s codebase, also shares its stability, though the additional features have certainly made it slightly prone to crashes and bugs.

Conclusion

Both Ubuntu and Debian are way more stable than any Windows based OS. That said, Debian, because of its minimalistic features, beats Ubuntu on the stability scale hands down.

Performance

Both Ubuntu and Debian are significantly faster than comparable Windows operating systems. Debian is especially fast since it doesn’t come bundled with a bunch of performance degrading features and pre-installed software.

Ubuntu is faster than Windows, though the added features affect the performance when compared with Debian. Expect both Ubuntu and Debian to slow down over time as feature-bloat keeps piling up, though you’ll be hard-pressed to find an Ubuntu or Debian machine running slower than an equivalent Mac or Windows.

Conclusion

Both Debian and Ubuntu boast significantly better performance than Windows; you can’t go wrong in choosing either of them, though Debian has the slight edge.

Community and Support

The best thing about using open-source software is the sense of community it engenders. Both Ubuntu and Debian have strong, active developer communities, though Debian, since it is basically built by volunteer developers, has a much larger community.

Debian’s community tends to be more technically oriented. Ubuntu’s community, on the other hand, is more welcoming to newcomers and beginners.

If you’re willing to pay (a blasphemous word in open-source circles), you can get expert support directly from Canonical Ltd. With Debian, you have to rely on community forums.

Conclusion

Debian’s community is large and vibrant, though Ubuntu’s is more newbie friendly.

Releases

Debian typically has three release-types in the works. These are:

  • Stable: This is the stable, ready to deploy version you can use on your desktops and servers.

  • Testing: This is the version undergoing testing before it can become stable.

  • Unstable: This is the shaky, under-trial version mostly used by developers to tinker with the code.

Ubuntu, on the other hand, offers two OS versions:

  • Ubuntu LTS: LTS stands for Long Term Support. Ubuntu typically receives updates every six months. The LTS version, however, is released every 2 years. This means that the LTS version has outdated software and hardware drivers, but is also significantly more stable. The current LTS version is 12.04.

  • Ubuntu non-LTS: Also called a ‘standard release’, these are released every six months and feature the latest software. The current non-LTS version of Ubuntu is 13.10.

Ubuntu non-LTS releases are basically built upon unstable versions of Debian. The folks at Canonical take the best from Debian unstable, improve it, add the latest software and hardware drivers, and release it to the general public in the form of a standard release.

Conclusion

Ubuntu releases updates every six months; Debian’s stable releases are more sporadic, though you can always play around with the testing version. Ubuntu’s faster release cycle means that Ubuntu usually includes the latest software.
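
To check which release a given machine is actually running, both distros answer to the same commands (assuming the lsb-release package is installed; /etc/issue works as a fallback):

lsb_release -a
cat /etc/issue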

The Verdict

Debian and Ubuntu are meant for different users. Ubuntu is aimed at inexperienced users new to Linux, while Debian is a minimalist, no-frills OS created for developers, tinkerers and open-source enthusiasts. If you’ve never used Linux before, we recommend going with Ubuntu. If you’re already familiar with Linux, give Debian a shot – you won’t be disappointed.

Source: https://www.udemy.com


How to create a local Ubuntu repository (update/upgrade distros ‘locally’)?

Posted by Chetan R. Solanki on June 25, 2014

Following are the steps to create a local repository for Ubuntu:

The whole process consists of the following two stages:

I) Setting up the local repository server – ‘reposerver’.

II) Setting up other machines/clients to use our server as the repository source.

STAGE I:

I) Setting up the local repository server: reposerver.

We have to keep a machine dedicated to the local repository; let’s call this machine ‘reposerver’. The main requirement for reposerver is disk space – 100 GB is recommended.

Here, I have created a separate 100 GB partition for /var (mounted under /media) and assigned this space for storing the repo packages. You can also use an external storage device for this purpose.
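
If you go the dedicated-partition route, the preparation is roughly as follows (a sketch only – /dev/sdb1 is a placeholder for whatever spare disk or partition you actually have, and formatting it destroys its contents):

# Format the spare partition and mount it where the repo packages will live
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /media/var
sudo mount /dev/sdb1 /media/var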

Once the disk space is ready, we need to install two packages on reposerver: apt-mirror and apache2.

apt-mirror:

‘apt-mirror’ creates a mirror of a repository from the Ubuntu servers on our local machine (reposerver). It is a Perl-based utility for downloading and mirroring all the packages of a public Ubuntu repository. To install apt-mirror:

sudo apt-get install apt-mirror
Open the configuration file for apt-mirror

sudo gedit /etc/apt/mirror.list
You will find a sample configuration file like the one below.

############# config ##################
#
# set base_path /var/spool/apt-mirror
#
# if you change the base path you must create the directories below with write privileges
#
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch <running host architecture>
# set postmirror_script $var_path/postmirror.sh
set run_postmirror 0
set nthreads 20
set _tilde 0
#
############# end config ##############
deb http://archive.ubuntu.com/ubuntu karmic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu karmic-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu karmic-updates main restricted universe multiverse
#deb http://archive.ubuntu.com/ubuntu karmic-proposed main restricted universe multiverse
#deb http://archive.ubuntu.com/ubuntu karmic-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu karmic main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu karmic-security main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu karmic-updates main restricted universe multiverse
#deb-src http://archive.ubuntu.com/ubuntu karmic-proposed main restricted universe multiverse
#deb-src http://archive.ubuntu.com/ubuntu karmic-backports main restricted universe multiverse
clean http://archive.ubuntu.com/ubuntu

All our machines currently run Ubuntu 9.10 Karmic Koala. We are now going to mirror the repository for the latest release, Lucid Lynx, so that we can use it to update/upgrade all machines. We need to replace ‘karmic’ with ‘lucid’ in mirror.list. You can use the ‘replace all’ option in gedit. The modified mirror.list file will look like the one below.

############# config ##################
#
set base_path /var/spool/apt-mirror
#
#set mirror_path $base_path/mirror
#set skel_path $base_path/skel
#set var_path $base_path/var
#set cleanscript $var_path/clean.sh
#set defaultarch <running host architecture>
#set postmirror_script $var_path/postmirror.sh
set run_postmirror 0
set nthreads 20
set _tilde 0
#
############# end config ##############
deb http://archive.ubuntu.com/ubuntu lucid main restricted
deb http://archive.ubuntu.com/ubuntu lucid-updates main restricted
deb http://archive.ubuntu.com/ubuntu lucid universe
deb http://archive.ubuntu.com/ubuntu lucid-updates universe
deb http://archive.ubuntu.com/ubuntu lucid multiverse
deb http://archive.ubuntu.com/ubuntu lucid-updates multiverse
deb http://archive.ubuntu.com/ubuntu lucid-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu lucid-backports main restricted universe multiverse
deb http://archive.canonical.com/ubuntu lucid partner
deb-src http://archive.canonical.com/ubuntu lucid partner
deb http://security.ubuntu.com/ubuntu lucid-security main restricted
deb http://security.ubuntu.com/ubuntu lucid-security universe
deb http://security.ubuntu.com/ubuntu lucid-security multiverse
clean http://archive.ubuntu.com/ubuntu

Once you are done with the configuration, start mirroring by issuing the below command

apt-mirror
Mirroring will now begin:

Downloading 146 index files using 20 threads…
Begin time: Sat May 15 16:08:15 2010
[20]… [19]… [18]… [17]… [16]… [15]… [14]… [13]… [12]… [11]… [10]… [9]… [8]… [7]… [6]… [5]… [4]… [3]… [2]… [1]… [0]…
End time: Sat May 15 16:08:22 2010

Proceed indexes: [SSPPPPPPPPPPP]…..
It will show the total size of the packages that are going to be downloaded.

Check whether the process is running properly

adminsage@adminsage-desktop:~$ ps aux | grep wget
root 3267 0.0 0.0 5392 1704 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.1 -i /media/var/spool/apt-mirror/var/archive-urls.1
root 3271 0.0 0.0 5392 1724 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.5 -i /media/var/spool/apt-mirror/var/archive-urls.5
root 3272 0.0 0.0 5392 1720 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.6 -i /media/var/spool/apt-mirror/var/archive-urls.6
root 3273 0.0 0.0 5392 1724 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.7 -i /media/var/spool/apt-mirror/var/archive-urls.7
root 3277 0.0 0.0 5392 1700 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.11 -i /media/var/spool/apt-mirror/var/archive-urls.11
root 3278 0.0 0.0 5392 1704 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.12 -i /media/var/spool/apt-mirror/var/archive-urls.12
root 3279 0.0 0.0 5392 1724 pts/2 S+ 15:19 0:00 wget --limit-rate=100m -t 0 -r -N -l inf -o /media/var/spool/apt-mirror/var/archive-log.13 -i /media/var/spool/apt-mirror/var/archive-urls.13
We can see several threads of wget running simultaneously. Once the download is complete, you will see a message like below.

[20]… [19]… [18]… [17]… [16]… [15]… [14]… [13]… [12]… [11]… [10]… [9]… [8]… [7]… [6]… [5]… [4]… [3]… [2]… [1]… [0]…

End time: Sat May 16 18:01:03 2010
20MB in 50092 files and 384 directories can be freed.
Run /media/var/spool/apt-mirror/var/clean.sh for this purpose.

We are now done with the mirroring of the repository and all packages are now available on your machine 🙂 Now we need to make this repo available to all other machines on the network via HTTP. Install Apache for this purpose.

sudo apt-get install apache2
Create Symbolic links

sudo ln -s /media/var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/ /var/www/ubuntu
sudo ln -s /media/var/spool/apt-mirror/mirror/archive.canonical.com/ /var/www/canonical
Try accessing the repo locally from http://localhost/ubuntu and http://localhost/canonical
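
A quick sanity check that Apache is actually serving the mirror is to request one of the index files we just mirrored (a sketch; the path assumes the lucid release configured above):

wget -S --spider http://localhost/ubuntu/dists/lucid/Release

A ‘200 OK’ response means the symlink and the mirror are in place.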

We have now successfully created the ‘reposerver’ and completed stage I of the whole process.

 

STAGE II:

II) Setting up other machines/clients to use our server (reposerver) as the source repository.

Note: The following steps must be done on the machine that we wish to update/upgrade using the ‘reposerver’, our local repository server.

Back up your current /etc/apt/sources.list:

cp -av /etc/apt/sources.list /etc/apt/sources.list_backup
Modify sources.list to use our ‘reposerver’ as the repository. Consider the IP of ‘reposerver’ to be 192.168.1.52.

sudo gedit /etc/apt/sources.list
Replace URLs like http://archive.ubuntu.com/ubuntu/dists/lucid/Release with http://ip_of_reposerver/ubuntu/dists/lucid/Release – in our case the IP of the reposerver is 192.168.1.52.

Use gedit’s ‘replace all’ option to make the substitutions listed below.

Replace http://in.archive.ubuntu.com with http://192.168.1.52

http://archive.canonical.com with http://192.168.1.52/canonical

http://security.ubuntu.com with http://192.168.1.52.
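
If you prefer the shell over gedit, the same substitutions can be done with sed (a sketch – it assumes your sources.list uses exactly these hostnames, and that the canonical archive maps to the /canonical path we symlinked on the server, as in the sample file below):

sudo sed -i 's|http://in.archive.ubuntu.com|http://192.168.1.52|g' /etc/apt/sources.list
sudo sed -i 's|http://archive.canonical.com|http://192.168.1.52/canonical|g' /etc/apt/sources.list
sudo sed -i 's|http://security.ubuntu.com|http://192.168.1.52|g' /etc/apt/sources.list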

The sample sources.list will look like the one below.

# deb cdrom:[Ubuntu 9.10 _lucid Koala_ – Release i386 (20091028.5)]/ lucid main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://192.168.1.52/ubuntu lucid main restricted
deb-src http://192.168.1.52/ubuntu lucid main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://192.168.1.52/ubuntu lucid-updates main restricted
deb-src http://192.168.1.52/ubuntu lucid-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://192.168.1.52/ubuntu lucid universe
deb-src http://192.168.1.52/ubuntu lucid universe
deb http://192.168.1.52/ubuntu lucid-updates universe
deb-src http://192.168.1.52/ubuntu lucid-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://192.168.1.52/ubuntu lucid multiverse
deb-src http://192.168.1.52/ubuntu lucid multiverse
deb http://192.168.1.52/ubuntu lucid-updates multiverse
deb-src http://192.168.1.52/ubuntu lucid-updates multiverse
## Uncomment the following two lines to add software from the ‘backports’
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://192.168.1.52/ubuntu lucid-backports main restricted universe multiverse
deb-src http://192.168.1.52/ubuntu lucid-backports main restricted universe multiverse
## Uncomment the following two lines to add software from Canonical’s
## ‘partner’ repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://192.168.1.52/canonical/ubuntu lucid partner
deb-src http://192.168.1.52/canonical/ubuntu lucid partner
deb http://192.168.1.52/ubuntu lucid-security main restricted
deb-src http://192.168.1.52/ubuntu lucid-security main restricted
deb http://192.168.1.52/ubuntu lucid-security universe
deb-src http://192.168.1.52/ubuntu lucid-security universe
deb http://192.168.1.52/ubuntu lucid-security multiverse
deb http://192.168.1.52/ubuntu lucid main universe restricted multiverse
deb-src http://192.168.1.52/ubuntu lucid-security multiverse
Update the repo by issuing the command

sudo apt-get update
You will now be able to install and update all packages from our ‘reposerver’.

But this isn’t enough if you wish to do a complete dist upgrade. The dist upgrade reads certain files, and we need to modify these files to have an error-free upgrade.

Dist upgrade:

Go back to our server (reposerver) and download the following files under /var/www/:

wget -c http://changelogs.ubuntu.com/meta-release
wget -c http://changelogs.ubuntu.com/meta-release-lts
Create the directory:

sudo mkdir -p /media/var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current
Change directory to /media/var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current:

cd /media/var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current
And download the files below into the current directory, ‘/media/var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current’:

wget -c http://archive.ubuntu.com/ubuntu/dists/lucid/Release
wget -c http://archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current/ReleaseAnnouncement
wget -c http://archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current/lucid.tar.gz
wget -c http://archive.ubuntu.com/ubuntu/dists/lucid/main/dist-upgrader-all/current/lucid.tar.gz.gpg
You are done with the server-side modifications. Now go back to your client machine for which the distribution has to be upgraded. Modify the files below so that their URLs point to our ‘reposerver’:

sudo gedit /etc/update-manager/meta-release
sudo gedit /usr/share/update-manager/mirrors.cfg
Sample files

/usr/share/update-manager/mirrors.cfg

#ubuntu

http://192.168.1.52/ubuntu/

http://192.168.1.52/ubuntu/

ftp://192.168.1.52/ubuntu/
ftp://192.168.1.52/ubuntu/
mirror://launchpad.net/ubuntu/+countrymirrors-archive

http://ports.ubuntu.com/

ftp://ports.ubuntu.com/

http://ports.ubuntu.com/ubuntu-ports/

ftp://ports.ubuntu.com/ubuntu-ports/

http://old-releases.ubuntu.com/

ftp://old-releases.ubuntu.com/
Modify the URL at /etc/update-manager/meta-release

# default location for the meta-release file

[METARELEASE]
URI = http://192.168.1.52/meta-release
URI_LTS = http://192.168.1.52/meta-release-lts
URI_UNSTABLE_POSTFIX = -development
URI_PROPOSED_POSTFIX = -proposed
Update the repository

sudo apt-get update
To upgrade the distribution, issue the below command.

sudo do-release-upgrade
The machine will now automatically detect the new release and will ask for confirmation before upgrading. Everything is now local and we have saved huge bandwidth and ‘TIME’ 🙂

Note: Run apt-mirror at regular intervals using cron so that the packages will always be up to date and hence in sync with the Ubuntu servers.
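
For example, a root crontab entry that refreshes the mirror every night at 2 AM could look like this (a sketch – the binary path and log location may differ on your install):

# Edit root's crontab
sudo crontab -e
# and add a line such as:
0 2 * * * /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log 2>&1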


Android App Emulator for Windows

Posted by Chetan R. Solanki on April 14, 2013

An emulator is used to test an application in a simulated environment to see how well it will perform on an actual device. Besides testing out an app, it also lets you try out new applications launched on Google Play.

Third-party Android emulators run the latest games and allow budding developers to experiment with their brand-new apps. There are quite a few applications that let users run the latest software and games effortlessly. Here’s a rundown of some of the best app players:

1. BlueStacks

BlueStacks is one of the top emulators that allow you to try out new mobile apps on your PC. It runs old as well as new apps launched on Google Play. You can try out your own apps too – just make sure the file has an “.apk” extension. All you have to do is right-click on your apk file and select the app player to test it out.

There was a time when BlueStacks could run only a handful of apps, but new enhancements have now turned it into a complete emulator. You can now run almost any application or game effortlessly. However, you may experience slow frame rates while playing 3D games. The app player has a nifty search option where you can type in the app name, click on the search result and wait for the app to install completely.

The app player also lets you tweak and manage installed apps. You can change the app size (tablet, large phone or default), uninstall apps or add an onscreen keyboard. There’s also an option to sync your device with this app player for smooth app synchronization between your phone/tablet and PC.

The best part of the application player is its interface. I found it very simple and clutter-free. I could easily find my installed apps via a menu listing all applications horizontally. There’s also a vertical sidebar that suggests new apps to download. BlueStacks is still in beta and is available for free, but it may turn into a paid app in the future.

2. Jar of Beans

Jar of Beans is a new user-developed emulator that’s doing the rounds at the moment. Developed by “UnrealManu”, the app player runs all apps and games supported by Android Jelly Bean devices. The best part of this app is that games that require hardware graphics acceleration run smoothly on this emulator.


Jar of Beans is second best to BlueStacks when it comes to emulating apps and games. It can play games in full screen and can automatically switch to “tablet mode” for Android tablet-supported games. There are plenty of settings which can help you customize your gameplay and app-usage experience. It offers a wide range of configuration options, including different viewing modes, a virtual SD card and keyboard support. In future versions, the emulator will boast multiple resolutions and skins.

Jar of Beans can also double up as an app-testing sandbox. You can easily test your apps without needing a real device. You can create your apk files and install them in the emulator. It has a special button that lets you import apk files stored on your computer. Since it allows custom settings, you can easily tweak options according to your app preferences and can even create a virtual SD card of any size you want. A must-download Android emulator for your PC!

3. AMD AppZone

AppZone is the emulator Android gamers have been waiting for! Powered by BlueStacks, this free app player is developed exclusively for players who would love to play 3D Android games on their PCs. However, users will need an AMD-powered PC to play games on this emulator.


The best part of AMD AppZone is that it can run games in full-screen. The AMD website also has select games that you can install and play on your AMD-powered emulator. Not only games, but also some of the top productivity apps and utilities are listed in the AppZone application page. Like BlueStacks, the app player allows for easy synchronization between apps on your device and computer.

AppZone can be used to try out some of the top Android games. The app player is not suitable for application testing. If you are looking for a complete mobile gaming experience on your laptop, then you should try this out.

4. YouWave

YouWave supports apps built for Android 2.3 devices and performs exactly like your phone or tablet. It can rotate the screen, play multiplayer games and has simulated memory card functionality, allowing users to save their games in the same way they would on an actual device.

This Android app emulator for Windows has a built-in application store that lets you search for a game or software and download it to your PC. However, there are some restrictions (read: flaws) that won’t allow downloading of some apps. The app player can’t play games like Angry Birds because of unsupported hardware: Angry Birds has ARM native code and runs on devices powered by ARM processors. YouWave does not support this, nor does it support applications that require hardware sensors.

 

I would recommend BlueStacks if you want a complete Android emulation experience.


How to run CUDA programs without a GPU?

Posted by Chetan R. Solanki on February 27, 2013

Development without GPU

So you want to run the code on your machine but you don’t have a GPU? Or maybe you want to try things out before firing up your AWS instance? Here I show you a way to run the CUDA code without a GPU.

Note: this only works on Linux, maybe there are other alternatives for Mac or Windows.

Ocelot lets you run CUDA programs on NVIDIA GPUs, AMD GPUs and x86-CPUs without recompilation. Here we’ll take advantage of the latter to run our code using our CPU.

Dependencies

You’ll need to install the following packages:

  • C++ Compiler (GCC)
  • Lex Lexer Generator (Flex)
  • YACC Parser Generator (Bison)
  • SCons

And these libraries:

  • boost_system
  • boost_filesystem
  • boost_serialization
  • GLEW (optional for GL interop)
  • GL (for NVIDIA GPU Devices)

With Arch Linux, this should go something like this:

pacman -S gcc flex bison scons boost glew

On Ubuntu it should be similar (sudo apt-get install flex bison g++ scons libboost-all-dev). If you don’t know the name of a package, search for it with ‘apt-cache search package_name’.

You should probably install LLVM too. It’s not mandatory, but I think Ocelot runs faster with LLVM.

pacman -S llvm clang

And of course you’ll need to install CUDA and the OpenCL headers. You can do it manually or using your distro’s package manager (for Ubuntu I believe the package is called nvidia-cuda-toolkit):

pacman -S cuda libcl opencl-nvidia
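
On Ubuntu, the equivalent should be something along these lines (the opencl-headers package name is my assumption – check with apt-cache search opencl if it differs on your release):

sudo apt-get install nvidia-cuda-toolkit opencl-headers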

One last dependency is Hydrazine. Fetch the source code:

svn checkout http://hydrazine.googlecode.com/svn/trunk/ hydrazine

Or if you’re like me and prefer Git:

git svn clone -s http://hydrazine.googlecode.com/svn/ hydrazine

And install it like this (you might need to install automake if you don’t have it already):

cd hydrazine
libtoolize
aclocal
autoconf
automake --add-missing
./configure
sudo make install

Installation

Now we can finally install Ocelot. This is where it gets a bit messy. Fetch the Ocelot source code:

svn checkout http://gpuocelot.googlecode.com/svn/trunk/ gpuocelot

Or with Git (warning, this will take a while, the whole repo is about 1.9 GB):

git svn clone -s http://gpuocelot.googlecode.com/svn/ gpuocelot

Now go to the ocelot directory:

cd gpuocelot/ocelot

And install Ocelot with:

sudo ./build.py --install

Troubleshooting

Sadly, the last command probably failed. This is how I fixed the problems.

Hydrazine headers not found

You could fix this by adding an include flag. I just added a symbolic link to the hydrazine code we downloaded previously:

ln -s /path/to/hydrazine/hydrazine

Make sure you link to the second hydrazine directory (inside this directory you’ll find directories like implementation and interface). You should do this in the ocelot directory where you’re running the build.py script (gpuocelot/ocelot).

LLVM header file not found

For any error that looks like this:

llvm/Target/TargetData.h: No such file or directory

Just edit the source code and replace it with this header:

llvm/DataLayout.h

The LLVM project moved the file.

PTXLexer errors

The next problem I ran into was:

.release_build/ocelot/ptxgrammar.hpp:351:14:error:'PTXLexer' is not a member of 'parser'

Go ahead, open the ‘.release_build/ocelot/ptxgrammar.hpp’ file and just comment out line 355:

/* int yyparse (parser::PTXLexer& lexer, parser::PTXParser::State& state); */

That should fix the error.

boost libraries not found

On up-to-date Arch Linux boxes, it will complain about not finding boost libraries ‘boost_system-mt’, ‘boost_filesystem-mt’, ‘boost_thread-mt’.

I had to edit two files:

  • scripts/build_environment.py
  • SConscript

And just remove the trailing -mt from the library names:

  • boost_system
  • boost_filesystem
  • boost_thread

Finish the installation

After those fixes everything should work.

Whew! That wasn’t fun. Hopefully with the help of this guide it won’t be too painful.

To finish the installation, run:

sudo ldconfig

And you can check that the library was installed correctly by running:

OcelotConfig -l

It should return -locelot. If it didn’t, check your LD_LIBRARY_PATH. On my machine, Ocelot was installed under /usr/local/lib so I just added this to my LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

Here’s the link to the installation instructions.

Running the code with Ocelot

We’re finally ready to enjoy the fruits of our hard work. We need to do two things:

Ocelot configuration file

Add a file called configure.ocelot to your project (in the same directory as our Makefile and student_func.cu files), and copy this:

{
    ocelot: "ocelot",
    trace: {
        database: "traces/database.trace",
        memoryChecker: {
            enabled: false,
            checkInitialization: false
        },
        raceDetector: {
            enabled: false,
            ignoreIrrelevantWrites: false
        },
        debugger: {
            enabled: false,
            kernelFilter: "",
            alwaysAttach: true
        }
    },
    cuda: {
        implementation: "CudaRuntime",
        tracePath: "trace/CudaAPI.trace"
    },
    executive: {
        devices: [llvm],
        preferredISA: nvidia,
        optimizationLevel: full,
        defaultDeviceID: 0,
        asynchronousKernelLaunch: True,
        port: 2011,
        host: "127.0.0.1",
        workerThreadLimit: 8,
        warpSize: 16
    },
    optimizations: {
        subkernelSize: 10000,
    }
}

You can check this guide for more information about these settings.

Compile with the Ocelot library

And lastly, a small change to our Makefile. Append this to the GCC_OPTS:

GCC_OPTS=-O3 -Wall -Wextra -m64 `OcelotConfig -l`

And change the student target so it uses g++ and not nvcc:

student: compare main.o student_func.o Makefile
    g++ -o hw main.o student_func.o -L $(OPENCV_LIBPATH) $(OPENCV_LIBS) $(GCC_OPTS)

I just replaced ‘nvcc’ with ‘g++’ and ‘NVCC_OPTS’ with ‘GCC_OPTS’.

make clean
make

And that’s it!

I forked the GitHub repo and added these changes in case you want to take a look.

I found this guide helpful; it might have some additional details for installing things under Ubuntu and/or manually.

Note for debian users

I successfully installed Ocelot under Debian Squeeze, following the above steps, except that I needed to download LLVM from upstream, as indicated in the above guide for Ubuntu.

Other than that, after fixing some includes as indicated (replacing ‘TargetData.h’ with ‘IR/DataLayout.h’, or adding ‘/IR/’ to some includes), it just compiled.

To build the student project, I needed to replace -m64 by -m32 to fit my architecture, and to make the other indicated changes.

Here are my makefile diffs:

$ git diff Makefile
diff --git a/HW1/student/Makefile b/HW1/student/Makefile
index b6df3a4..55480af 100755
--- a/HW1/student/Makefile
+++ b/HW1/student/Makefile
@@ -22,7 +22,8 @@ OPENCV_INCLUDEPATH=/usr/include

 OPENCV_LIBS=-lopencv_core -lopencv_imgproc -lopencv_highgui

-CUDA_INCLUDEPATH=/usr/local/cuda-5.0/include
+#CUDA_INCLUDEPATH=/usr/local/cuda-5.0/include
+CUDA_INCLUDEPATH=/usr/local/cuda/include

 ######################################################
 # On Macs the default install locations are below    #
@@ -36,12 +37,12 @@ CUDA_INCLUDEPATH=/usr/local/cuda-5.0/include
 #CUDA_INCLUDEPATH=/usr/local/cuda/include
 #CUDA_LIBPATH=/usr/local/cuda/lib

-NVCC_OPTS=-O3 -arch=sm_20 -Xcompiler -Wall -Xcompiler -Wextra -m64
+NVCC_OPTS=-O3 -arch=sm_20 -Xcompiler -Wall -Xcompiler -Wextra -m32

-GCC_OPTS=-O3 -Wall -Wextra -m64
+GCC_OPTS=-O3 -Wall -Wextra -m32 `OcelotConfig -l` -I /usr/include/i386-linux-gn

 student: compare main.o student_func.o Makefile
-       $(NVCC) -o hw main.o student_func.o -L $(OPENCV_LIBPATH) $(OPENCV_LIBS) 
+       g++ -o hw main.o student_func.o -L $(OPENCV_LIBPATH) $(OPENCV_LIBS) $(GC

 main.o: main.cpp timer.h utils.h HW1.cpp
        g++ -c main.cpp $(GCC_OPTS) -I $(CUDA_INCLUDEPATH) -I $(OPENCV_LIBPATH)
$

I’m using cuda toolkit 4.2.

I don’t know why, but it was necessary to add /usr/lib/gcc/i486-linux-gnu/4.4 to the PATH for nvcc to work:

export PATH=$PATH:/usr/lib/gcc/i486-linux-gnu/4.4

Eclipse CUDA plugin

This is probably for another entry, but I used this guide to integrate CUDA into Eclipse Indigo.

The plugin is the University of Bayreuth’s Eclipse toolchain for the CUDA compiler.

Good luck!


Comparison of Qualnet and NS2 network simulators

Posted by Chetan R. Solanki on February 7, 2013

Here is a point-by-point comparison of Qualnet and NS2:

1. Qualnet: Commercial simulator, based upon the GloMoSim simulator.
   NS2: Freely available for research and educational purposes.

2. Qualnet: Uses the Parallel Simulation Environment for Complex Systems (PARSEC) for basic operations, hence can run on distributed machines.
   NS2: Runs on a single machine; no parallel execution support in NS2.

3. Qualnet: Mainly developed for wireless scenario simulations, but wired networks are also supported.
   NS2: Mainly developed for wired networks, but its CMU extension facilitates wireless network simulation.

4. Qualnet: Includes a graphical user interface for creating the model and its specification, so it is very easy to specify small to medium networks using the GUI.
   NS2: To create and simulate a model, we have to specify all connections in a special model file manually; OTcl is used for model file specifications.

5. Qualnet: Since it primarily uses Java for the GUI, it is available for Linux as well as for Windows; the simulator itself is an optimized C program built for the specified target system.
   NS2: Designed for Unix systems but runs under Windows CygWin as well.

6. Qualnet: Faster simulation speeds and greater scalability are achievable through its smart architecture and optimized memory management.
   NS2: Not as fast or scalable.

7. Qualnet: Not used much in research as it is not freely available, hence less support (code samples etc.) is available on the Web.
   NS2: Widely used for research, hence a large number of resources are available on the Web.

8. Qualnet: Simulation of wireless sensor networks is supported in Qualnet 4.5 (using the ZigBee library).
   NS2: No such support available as of now.

9. Qualnet: Simulation of GSM mobile networks is also supported.
   NS2: GSM is not supported.

10. Qualnet: Includes a variety of advanced libraries such as mesh networking, battery models, a network security toolkit, and a large number of protocols at different layers.
    NS2: These advanced libraries for wireless support are not available.

11. Qualnet: Includes a 3D visualizer for better visualization of a scenario.
    NS2: A 2D animator (NAM) is used.

12. Qualnet: Much easier for beginners who want to evaluate and test different existing routing protocols, as it can be done with the GUI without writing a single line of code.
    NS2: Requires knowledge of Tcl scripting before you can begin.

13. Qualnet: For implementing new protocols, Qualnet uses C/C++ and follows a procedural paradigm.
    NS2: Also uses C++ for new protocol/model development, but follows an object-oriented paradigm (usage of different classes).


Grid Computing definition

Posted by Chetan R. Solanki on January 13, 2013

  • Grid computing is a computer network in which each computer’s resources are shared with every other computer in the system. Processing power, memory and data storage are all community resources that authorized users can tap into and leverage for specific tasks. A grid computing system can be as simple as a collection of similar computers running on the same operating system or as complex as inter-networked systems comprised of every computer platform you can think of.
  • Grid computing is the federation of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.[1] Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries.
  • Grid computing is a form of networking. Unlike conventional networks that focus on communication among devices, grid computing harnesses unused processing cycles of all computers in a network for solving problems too intensive for any stand-alone machine. A well-known grid computing project is the SETI (Search for Extraterrestrial Intelligence) @Home project, in which PC users worldwide donate unused processor cycles to help the search for signs of extraterrestrial life by analyzing signals coming from outer space. The project relies on individual users volunteering to let the project harness the unused processing power of their computers. This method saves the project both money and resources.
  • Grid computing is a special kind of distributed computing. In distributed computing, different computers within the same network share one or more resources. In the ideal grid computing system, every resource is shared, turning a computer network into a powerful supercomputer. With the right user interface, accessing a grid computing system would look no different than accessing a local machine’s resources. Every authorized computer would have access to enormous processing power and storage capacity.
  • Grid computing combines computers from multiple administrative domains to reach a common goal, to solve a single task, and may then disappear just as quickly. One of the main strategies of grid computing is to use middleware to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing involves computation in a distributed fashion, which may also involve the aggregation of large-scale cluster computing-based systems.
  • Grid computing is a technology used to harness computing power from various sources and use it in harmony to achieve a specific goal. The great advantage of grid computing is the ability to significantly reduce the time taken to accomplish that goal, thereby increasing efficiency. Most applications of grid computing are ones where the computing resources of one computing unit prove to be insufficient for the task at hand. The computing unit in question could potentially range from a single personal computer to a supercomputer within a large organization. For example, a weather forecasting unit would require multiple variables and calculations within the program. Computing various scenarios and determining the probability of each scenario requires a large amount of computing power and time. The data required for such a task needs to be current, and the results need to be available within a certain time frame. This is an ideal application for grid computing.

Posted in Grid Computing | Tagged: | Leave a Comment »

Cloud Computing vs. Grid Computing

Posted by Chetan R. Solanki on January 13, 2013

Introduction

Cloud computing and grid computing are two relatively new concepts in the field of computing. They are often mistaken for the same thing; however, they are not the same at all.
Both grid and cloud computing are networks that abstract processing tasks. Abstraction masks the complex processes taking place within a system and presents the user with a simplified interface to interact with. The idea is to make the system more user-friendly while retaining all the benefits of the more complicated processes underneath.

Although grid and cloud computing differ in their fundamental concepts, that does not mean they are mutually exclusive; it is quite feasible to have a cloud within a computational grid, just as it is possible to have a computational grid as part of a cloud. They can even be the same network, merely represented in two different ways.

Advantages of Distributed Computing

Distributed computing, as the name suggests, is an arrangement in which the computing elements of a network are spread over a large geographical area. Both cloud and grid computing are prime examples of distributed computing architectures.

The main advantage of this sort of environment is the ability to tap into multiple areas of expertise through a single resource. For example, in a cloud computing environment there are often multiple servers, each of which performs a single task excellently; using the cloud gives a user access to all of these servers through one interface. Computing elements also come with their own set of requirements, such as appropriate storage, physical security and regular maintenance. In a distributed computing environment, since the elements are spread out, these costs are distributed accordingly.

There are many architectures for distributed computing environments; the focus of this article is on cloud computing and grid computing.

Cloud Computing

Cloud computing is an extension of the object-oriented programming concept of abstraction. Abstraction, as explained earlier, removes the complex working details from visibility. All that is visible is an interface, which receives inputs and provides outputs. How these outputs are computed is completely hidden.

For example, a car driver knows that the steering wheel will turn the car in the direction they want to go, or that pressing the accelerator will cause the car to speed up. The driver is usually unconcerned with how the movements of the steering wheel and the accelerator pedal are translated into the actual motion of the car. These details are therefore abstracted from the driver.

A cloud is similar; it applies the concept of abstraction in a physical computing environment, by hiding the true processes from a user. In a cloud computing environment, data can exist on multiple servers, details of network connections are hidden and the user is none the wiser. In fact, cloud computing is so named because a cloud is often used to depict inexact knowledge of inner workings.
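To make the abstraction idea concrete, here is a minimal Python sketch (the `CloudStore` class and its hashing scheme are purely illustrative, not any real provider’s API): the caller only ever sees `put` and `get`, while the decision about which back-end server actually holds the data stays hidden.

```python
# Minimal illustration of abstraction: the caller works with a simple
# interface and never sees which back-end "server" holds the data.
import hashlib


class CloudStore:
    """Hypothetical storage facade; plain dictionaries stand in for servers."""

    def __init__(self, server_count=3):
        # Each dict represents a separate storage server.
        self._servers = [{} for _ in range(server_count)]

    def _pick_server(self, key):
        # Hidden detail: a hash of the key decides which server is used.
        index = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self._servers)
        return self._servers[index]

    def put(self, key, value):
        self._pick_server(key)[key] = value

    def get(self, key):
        return self._pick_server(key).get(key)


store = CloudStore()
store.put("report.txt", "quarterly figures")
print(store.get("report.txt"))  # the user never learns where the data lived
```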

Cloud computing derives heavily from the Unix paradigm of having multiple elements, each excellent at one particular task, rather than one massive element that does everything less well.

Grid Computing

Grid computing harnesses the idle processing power of various computing units and uses it to compute one job. The job is controlled by one main computer and broken down into multiple tasks that can be executed simultaneously on different machines. These tasks needn’t be entirely independent of one another, although that is the ideal scenario. As the tasks complete on the various computing units, the results are sent back to the controlling unit, which collates them into a cohesive output.

The advantage of grid computing is twofold: first, otherwise unused processing power is put to work, maximizing the available resources; second, the time taken to complete the large job is significantly reduced.

For a job to be suited to grid computing, the code needs to be parallelized. Ideally, the source code should be restructured into tasks that are as independent of one another as possible. That is not to say they cannot be interdependent; however, messages passed between tasks add to the overall run time. An important consideration when creating a grid computing job is that whether the code is executed serially or as parallel tasks, the outcome must be the same under every circumstance.
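As a rough sketch of this split-compute-collate cycle, the following Python snippet uses the standard multiprocessing module, with local worker processes standing in for the machines of a grid; the function names and the squaring workload are invented for illustration.

```python
# Sketch of the grid pattern: one "controlling unit" splits a job into
# independent tasks, farms them out, then collates the partial results.
from multiprocessing import Pool


def process_chunk(chunk):
    # Stand-in for real work; each task is independent of the others.
    return sum(x * x for x in chunk)


def run_job(data, workers=4):
    # Divide the job into roughly equal, independent pieces.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partial_results = pool.map(process_chunk, chunks)  # run in parallel
    return sum(partial_results)  # collate into one cohesive output


if __name__ == "__main__":
    numbers = list(range(1_000_000))
    # The serial and parallel outcomes must match, as noted above.
    assert run_job(numbers) == sum(x * x for x in numbers)
    print("job complete")
```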

Cloud Computing vs. Grid Computing

The difference between grid computing and cloud computing is hard to grasp because they are not always mutually exclusive. In fact, they are both used to economize computing by maximizing existing resources. Additionally, both architectures use abstraction extensively, and both have distinct elements which interact with each other.

However, the difference between the two lies in the way the tasks are computed in each respective environment. In a computational grid, one large job is divided into many small portions and executed on multiple machines. This characteristic is fundamental to a grid; not so in a cloud.

The computing cloud is intended to let the user make use of various services without investing in the underlying architecture. While grid computing also offers a similar facility for raw computing power, cloud computing isn’t restricted to just that: a cloud can offer many different services, from web hosting right down to word processing. In fact, a computing cloud can combine services to present the user with a homogeneous, optimized result.

There are many computing architectures that are often mistaken for one another because of certain shared characteristics. Again, these architectures are not mutually exclusive; they are, however, conceptually distinct.

Posted in Cloud Computing, Grid Computing | Tagged: | Leave a Comment »

Cloud app vs. web app: Understanding the differences

Posted by Chetan R. Solanki on January 2, 2013

The line between a cloud app and a web app remains as blurry as ever. This, of course, stems from the natural similarities between them. I’m of the opinion, however, that there are noteworthy differences, especially when looking to leverage cloud apps for a richer user customization experience and seamless integration with the resilient, scalable back-end infrastructure that often characterizes public cloud services.

Webolution

Just how different, similar or even blurry are these concepts? How is this of any concern to cloud consumers? And what should application service providers do to revolutionize their web apps for the cloud?

Cloud app

For me, a cloud app is the evolved web app. Like a web app, it is used to access online services over the Internet, but it is not always exclusively dependent on a web browser to work. A customizable, multi-tenant cloud app may be offered by a service provider solely through the web browser, but quite often the web interface is just one of several access methods alongside custom-built clients for the same online service.

Cloud apps are usually characterized by advanced features such as:

  • Data is stored in a cloud / cloud-like infrastructure
  • Data can be cached locally for full offline mode
  • Support for different user requirements, e.g., a data backup cloud app with features such as data compression, security and backup scheduling
  • Can be used from a web browser and/or custom-built apps installed on Internet-connected devices such as desktops and mobile phones
  • Can be used to access a wider range of services, such as on-demand computing cycles, storage and application development platforms

Examples of cloud apps

Some common examples include Mozy, Evernote, Sugar Sync, Salesforce, Dropbox, NetSuite, and Zoho.com. Other qualifying examples such as web email (Google, Yahoo, Microsoft Hotmail, etc.) may not be so obvious, but they depend on cloud technology and can be made available offline if consumers choose to configure them that way.

There are numerous websites where you can find useful information on cloud apps. I found www.getapp.com to be particularly informative; it includes cloud app reviews and ratings to help evaluate the apps.

Web apps

Web apps, on the other hand, are almost exclusively designed to be used from a web browser. A combination of server-side scripting (ASP, PHP, etc.) and client-side scripting (HTML, JavaScript, Adobe Flash) is commonly used to develop the web application. The web browser (a thin client) relies on the web server components installed on back-end infrastructure to do the heavy lifting in providing the core web services.
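This division of labour can be illustrated with a tiny, self-contained Python sketch (the page contents and port are made up): the server-side code answers HTTP requests, while the browser, as the thin client, merely renders the returned HTML and runs a line of client-side JavaScript.

```python
# Toy web app: the real work happens on the server; the browser
# (thin client) only renders the HTML and runs a line of JavaScript.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html><body>
<h1>Order status</h1>
<p id="status">loading...</p>
<script>document.getElementById('status').textContent = 'shipped';</script>
</body></html>"""


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real server-side script would query databases, payment
        # gateways, etc.; here it just returns a static page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```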

The obvious benefit that this computing model provides over the traditional desktop app is that it is accessible from anywhere via the web browser. Cloud apps can also be accessed this way.

Examples of web apps

For many, including myself, web services such as WebEx, electronic banking, online shopping applications, and eBay fall into this category in as much as they are exclusively web-based with limited options for consumer customization.

In another example, I would include Facebook and similar types of web applications. I’m sure some will disagree with this, but I don’t think Facebook exactly offers customized services. It’s simply used as it is provided.

Conclusion

Application service providers have been quick to exploit the advantages brought about by pioneering web app building frameworks for greater customer reach. However, these technologies are not necessarily optimized for building new apps for the cloud era.

Cloud apps are web apps in the sense that they can be used through web browsers, but not all web apps are cloud apps. Software vendors often bundle web apps and sell them as “cloud” apps simply because “cloud” is the latest buzzword, but such web apps do not offer the richness in functionality and customization you’ll get from true cloud apps. So, buyer beware!

Some software application vendors also assume that just because their application runs on the web, it automatically qualifies as a cloud app. This is not always the case. For your web app to evolve into a cloud app, it should exhibit certain properties, such as:

  • True multi-tenancy to support the varying requirements and needs of consumers (see the sketch after this list)
  • Support for virtualization technology, which plays a starring role in cloud-era apps; web applications should either be built to support this or re-engineered to do so
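To give a very rough idea of what true multi-tenancy implies, here is a hypothetical Python sketch (the tenant names and record fields are invented): one shared application and data store serves many customers, with every read and write scoped to the tenant making the request, so tenants never see each other’s data.

```python
# Sketch of multi-tenancy: one shared application, with every read and
# write scoped to the tenant (customer) making the request.
class TenantStore:
    def __init__(self):
        self._rows = []  # shared table; each row is tagged with its tenant

    def add(self, tenant_id, record):
        self._rows.append({"tenant": tenant_id, **record})

    def list_for(self, tenant_id):
        # A tenant can only ever see its own rows.
        return [r for r in self._rows if r["tenant"] == tenant_id]


store = TenantStore()
store.add("acme", {"backup_schedule": "nightly", "compression": True})
store.add("globex", {"backup_schedule": "weekly", "compression": False})
print(store.list_for("acme"))  # only acme's settings come back
```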

The good news is that vendors looking to move into the cloud app space now have rich development platforms and frameworks to choose from, whether they are migrating an existing web app or starting from scratch. These new-age cloud app development platforms are affordable and agile, reducing time to market and software development complexity.

VMware Cloud Foundry, Google App Engine, Microsoft Azure, Appcara, Salesforce (Heroku and Force.com), AppFog, Engine Yard, Standing Cloud, and Mendix are examples of such development platforms offering cloud-based technology for building modern applications.

Posted in Cloud Computing | Leave a Comment »