The Next Cloud Computing Markets

Recently a friend of mine started smartcloud.ca, a Toronto-based startup that resells cloud services as a package deal with support. At first I thought it sounded like consulting: they would listen to a company’s needs, help it select among the cloud service offerings, and then handle registration, migration, support, and training. After migration, his business would be the primary support and training provider, while also acting as a liaison between the customers and the cloud services.

But consultants sell expertise for a single project, while this company is selling cloud service management and training on a pay-as-you-go model. It’s like how commercial Linux has operated for years: navigate the maze of software and package best-of-breed solutions with support, marketing, and in-house technology to present something all-inclusive, practical, and easy to understand. But will it work?

It sounds to me like a good partnership for cloud services. This company provides the marketing, training, sales, and branding to get cloud services into companies where web-based advertising (blogs, Twitter, Facebook) doesn’t have a lot of pull. To the cloud services it offers revenue and feedback. The aggregator has an incentive to compose feature requests, and even pay for them via bounty systems. It might also provide insight into how cloud services should work together.

And just like how Red Hat provides a package manager and support tools, a cloud aggregator would probably use technology to manage its myriad offerings. For example, locking down a dozen cloud apps consistently (as employees come and go) is a difficult problem. Standardized APIs could make this a lot easier, and an aggregator would be the one to push for that sort of thing.

Companies usually consider outsourcing when looking for solutions, and I’d bet more than a few would consider a model like this; it might prove very powerful in the next few years.

This is all speculation at this point, but I’d like to hear your thoughts on it. Yea or nay? Know anyone else in this market? Give me a heads-up.


Lisp Forum: It’s like Sudoku, but for Lispers.

I thought I’d plug Lisp Forum, a forum I frequent that answers Lisp-related questions; I go by the name “Warren Wilkinson”.

I go there when I’m taking a break from my own code. The problems posted by new Lispers are often short and stand-alone, like a good puzzle. And you learn a lot too: in this post, new user aaron4osu created a cipher to permute the words in a file. His version of randomize-list was very clever.

Randomize List


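;; Decorate each element with a random float, sort on the floats, then strip the decoration.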
(defun randomize-list (list)  
  (mapcar #'car (sort (mapcar (lambda (c) (cons c (random 1.0))) list)
                      #'> :key #'cdr)))

I’ve written randomize-list before, but mine worked by taking items from the list in random order to build up a new list:


(defun randomize-list (a)
  (let ((a (copy-list a))
        (b nil)
        (n (length a)))
    (loop repeat n
          for place = (random (length a))
          for item = (elt a place)
          do (push item b)
          do (setf a (delete item a :start place :count 1)))
    b))

Doing it aaron4osu’s way is about 3 times faster, and you can make it slightly faster still by consing random fixnums and using a fixnum-only <.


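;; (Despite its name, fixnum< compares with >; for a random shuffle the sort direction is irrelevant.)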
(defun fixnum< (a b) (> (the fixnum a) (the fixnum b)))
(defun randomize-list (list)  
  (mapcar #'cdr (sort (mapcar (lambda (c) (cons (random #xFFFFFF) c)) list)
                      #'fixnum< :key #'car)))

Removing Duplicates

Surprisingly, it’s faster to remove duplicates afterwards than to filter them out beforehand. Using remove-duplicates is about 20 times faster than checking each item before adding it (a hash-table membership test is much quicker than that, but still twice as slow as remove-duplicates). In SBCL, delete-duplicates was really slow.
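For concreteness, the two strategies look roughly like this (the function names and the equal test are my assumptions, not code from the thread):

;; Collect everything, then strip duplicates with the built-in:
(defun unique-after (words)
  (remove-duplicates words :test #'equal))

;; Filter duplicates up front, remembering seen words in a hash table:
(defun unique-before (words)
  (let ((seen (make-hash-table :test #'equal)))
    (loop for w in words
          unless (gethash w seen)
            collect w
            and do (setf (gethash w seen) t))))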

Replacing non-destructive routines with destructive ones (e.g. map vs map-into, remove-duplicates vs delete-duplicates, consing a new string vs replace) makes things slower.

Templating

In this post, I probably scared newbie indianerrostock away when he asked for improvements to his templating program.

Given a text file that contained #SPORTNAME# in multiple places, his program replaced each occurrence with the name of a sport. The kicker was that it did this in batch for many sport names.

His method was the obvious one: loop through the string searching for #SPORTNAME# while building a copy of the string with #SPORTNAME# replaced.
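A minimal sketch of that approach (the function name and argument order here are mine, not indianerrostock’s code):

;; Build a fresh string with every occurrence of PLACEHOLDER replaced by REPLACEMENT.
(defun substitute-placeholder (template placeholder replacement)
  (with-output-to-string (out)
    (loop with start = 0
          for pos = (search placeholder template :start2 start)
          while pos
          do (write-string template out :start start :end pos)
             (write-string replacement out)
             (setf start (+ pos (length placeholder)))
          finally (write-string template out :start start))))

Calling it once per sport, e.g. (substitute-placeholder template "#SPORTNAME#" "hockey"), gives the batch behaviour.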

My solution was to sort the sport names biggest to smallest, precompute where all the #SPORTNAME# markers were, and then replace them in the source string (i.e. no duplicate string, no memory allocation, etc.) with a sport name. Then I would repeat this, replacing the previous sport name with the next one.

My super-optimized, no-allocation, no-searching method was 2.5 times slower. It boggles the mind.

How Fast is Sorting?

This post starts out about CLOS and ends with a discussion of sorting. SBCL sorts lists with merge sort but vectors with heapsort; stable-sort uses merge sort for both vectors and lists.

It’s faster to use STABLE-SORT for vectors.
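If you want to check that yourself, a rough REPL sketch like the following shows the difference (the vector size is an arbitrary choice of mine):

(let ((v (map-into (make-array 1000000) (lambda () (random 1000000)))))
  (time (sort (copy-seq v) #'<))
  (time (stable-sort (copy-seq v) #'<)))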

Also, merge sort works well without random access, so sorting singly linked lists is fast; there is no need to convert a list into a vector before sorting it.

Why So Counter-intuitive?

Well, I think a lot of this has to do with memory allocation in Lisp. Lisp is garbage collected, and most Lisps use a stop-and-copy garbage collector, which acts like a constant defrag on your memory. One benefit of this is that allocation is VERY FAST, far faster than C’s malloc. In C you’d have to search a binary tree for available space, but in Lisp you just increment the memory-end pointer.

If you built a linked list one node at a time, C would have you do that search a hundred times, while Lisp would increment a pointer a hundred times. C’s linked list would be scattered through memory, while Lisp’s would be contiguous.

This is why Lisp is so fast at functional programming. You create a lot of garbage, but you create it fast, and it goes away fast.


Forth Databases

“Over the years, applications were added and discarded, including a real-time breath-by-breath exercise testing system, and various database applications. It migrated to a PDP 11/84 in 1987 and then the application source was rewritten for LMI’s UR/Forth on a PC in 1998.”

“Today the main applications are all database applications; the real-time applications having been replaced by turnkey systems that connect serially. The main applications are RT order entry, billing, PFT/Exercise data and ABG lab data. There are over 5,000 blocks of active Forth source code – perhaps”

Forth Success Stories

Most languages have libraries for connecting to databases; modern languages often include one in the standard distribution. Forth is an exception. There don’t seem to be libraries for connecting to standard SQL databases, and Forth programmers don’t care. When Forth programs need a database, they don’t use the familiar ones.


Say NO! to Net Neutrality

I would like to keep this blog primarily technical[1], but it looks to me like the majority of IT land is in favor of net neutrality, and I think that’s a mistake.


A Handful Of Useful Lisp Programming Tricks

Here are a few useful Lisp tricks; some are applicable to other languages.


Totals & Subtotals in FormLis

Totals are one of the coolest features in FormLis. When you make a form, you can add ‘views’ to show the submitted data, and these views can have totals (see: Fulfilled Requests by Quarter in the Purchase Request Tracker, Time Spent Per Person in the Legal Time Tracker, and All Campaigns in the Marketing Manager Demo). For a while, FormLis let you define totals using an accumulator style. This was powerful, and power is good, but not at the cost of simplicity; I’m going to migrate to something like what I had before.


Advice on Writing Big Programs

Programming is learnt through a series of throwaway programs that progress from childish to advanced. But the gap from amateur to professional is so large that no prior experience adequately prepares you. A professional program is bigger, and you can’t escape the hard parts: bugs must be fixed, deployment must be addressed, interfaces must be understandable. Novices avoid these tasks because they are less interesting than programming problems; professionals learn to address them.

