
4.15 Comparison to Prior Art Data-Flow Tools

The complete set of elements described above allows constructing universal building blocks that define the best parallelism to allow, without knowing anything about the elements connecting to them, without placing unnatural constraints on the idea being expressed, and without needing any centralized supervisory software to manage the system. This stands in complete contrast to prior systems that had similar goals but were unable to accomplish them in such a universal and efficient fashion, and were incapable of producing self-scalable code that could naturally run on a large number of processors at the same time.

 

detach struct Primes
{
   void CalculateOne ( int iPrime, bool* resultArray )
      {  return;   // everything below runs in the relaxed section
         int primeToTest = iPrime*2+3;
         bool isPrime = true;
         for ( int j=2; j<=sqrt(primeToTest); ++j )
            if ( primeToTest%j==0 )
            {   isPrime = false;
                break;
            };
         resultArray[iPrime] = isPrime;
      };
   //--------------------------------------------------
         Primes ( int numTerms )
      {  bool* resultArray = new bool[numTerms/2];
         for ( int i=0; i<numTerms/2; ++i )
            CalculateOne(i,resultArray);
      };
};

FIG. 25: Parallel prime number calculation example

Stress-flow lends itself extremely well to visual programming and, when used there, produces far more universal, efficient, and truly parallel code. The only problem to overcome is the erroneous claim of some prior-art systems that being able to design a program as a graph with multiple wire-like connections is by itself a sufficient methodology for describing universal parallelism. This claim was discussed in the background section of this application. In particular, LabVIEW™ Application Note 199, “LabVIEW™ and Hyper-Threading,” discussed there, made the strange and erroneous claim that the dataflow concept itself prevents full parallelism. To bypass this limitation, the note showed on pages 2 and 3 how splitting a “Primes Parallelism Example” into two loops, “odd” and “even,” can make such code able to run on two processors in parallel. To show that stress-flow naturally solves these problems and removes the limitations of the prior art, a stress-flow implementation of the “Primes Parallelism Example” is shown in FIG. 25. This is, of course, an extremely easy program to write in the stress-flow language, without any of the unnatural trickery found in the prior art. The implementation is very straightforward and, if enough processors are available, each individual prime number search will run on a separate processor, simultaneously with the other prime number searches. This is probably the simplest parallel problem possible, as there are no interactions and no synchronization needs between the calculations of different prime numbers. The constructor loop calls the CalculateOne stress-flow atom for each prime number to check. The CalculateOne stress-flow atom runs everything in its relaxed section, which means that as many CalculateOne instances will be initiated as there are waiting processors. The task is very similar in operation to the matrix processing routines shown in FIGS. 2 through 5 of this specification. If there is a need to report completion of the calculations, the exact method shown in FIGS. 4 and 5 can be used.
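For readers who wish to relate FIG. 25 to a conventional language, the sketch below approximates its behavior in standard C++11; it is an analogue under stated assumptions, not the stress-flow mechanism itself. Each odd candidate 2*i+3 is tested on its own thread, mirroring the way each CalculateOne invocation may run on a separate processor. The explicit thread container and the final join are artifacts of plain C++ that stress-flow does not need.

#include <thread>
#include <vector>

static void CalculateOne ( int iPrime, char* resultArray )
{
   int primeToTest = iPrime*2+3;
   bool isPrime = true;
   for ( int j=2; j*j<=primeToTest; ++j )
      if ( primeToTest%j==0 ) { isPrime = false; break; }
   resultArray[iPrime] = isPrime;    // each thread writes only its own slot
}

static void CalculatePrimes ( int numTerms )
{
   // char rather than std::vector<bool>: distinct char elements may be
   // written concurrently, while packed vector<bool> bits may not.
   std::vector<char> result ( numTerms/2 );
   std::vector<std::thread> workers;
   for ( int i=0; i<numTerms/2; ++i )
      workers.emplace_back ( CalculateOne, i, result.data() );
   for ( auto& t : workers )
      t.join();                      // stress-flow requires no such step
}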

detach struct Array
{   int   reserved;
    int   used;
    bool* data;

          Array ( int N )         {  reserved = N; used = 0;
                                     data = new bool[N];
                                  };
          ~Array()                {  delete[] data;
                                  };
    bool  Get   ( int i )         {  return data[i];
                                  };
    bool  Fill  ( int i, bool v ) {  data[i]=v;
                                     return ++used==reserved;
                                  };
};

FIG. 26: Array definition for parallel applications

In order to write better object-oriented code that includes notification of completion of the calculations, a small array object can be written as shown in FIG. 26. The code represents the simplified, short, basic operations on an array object as found in the standard libraries of object-oriented languages. The object reserves storage in its constructor and has simple operations for filling and retrieving array elements. The only difference is that the application needs to fill elements asynchronously, rather than sequentially by appending at the end as was the norm in prior-art object-oriented languages. To accommodate that, the object keeps a count of the elements already filled in the variable “used”. When the last element is filled, the filling function “Fill” reports this fact. All of the “Fill” code is in the stressed section to prevent corruption of, and erroneous access to, the “used” counter. This simple object is a short template for parallel operations on arrays and matrices that supports filling and working with array elements in parallel. All operations are stress-flow atoms sharing the same lock, which is accomplished by adding the “detach” specification to the struct definition. An actual, complete parallel array library object would most likely include more code for error checking, possibly for preventing filling the same element twice, and so on.
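To make the role of the stressed section concrete, the sketch below approximates FIG. 26 in standard C++11. The std::mutex member and the lock_guard are assumptions of this analogue, standing in for the single lock that the “detach” specification attaches to the whole struct; they are not part of the stress-flow code.

#include <mutex>

struct Array
{   int   reserved;
    int   used;
    bool* data;
    std::mutex lock;     // stand-in for the struct-wide stress-flow lock

    Array ( int N )   {  reserved = N; used = 0; data = new bool[N]; }
    ~Array()          {  delete[] data; }

    bool Get ( int i )   {  return data[i]; }

    // Corresponds to the stressed section of FIG. 26: without the lock,
    // two concurrent increments of "used" could be lost, and the
    // completion condition would be reported never, or twice.
    bool Fill ( int i, bool v )
    {   data[i] = v;                              // distinct slots, no race
        std::lock_guard<std::mutex> guard(lock);
        return ++used == reserved;                // true exactly once
    }
};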

 

struct Primes
{  connector Out(Array&);

   detach void CalculateOne ( int iPrime, Array& Results )
       {  return;   // everything below runs in the relaxed section
          int primeToTest = iPrime*2+3;
          bool isPrime = true;

          for ( int j=2; j<=sqrt(primeToTest); ++j )
             if ( primeToTest%j==0 )
             {   isPrime = false;
                 break;
             };

          if ( Results.Fill(iPrime,isPrime) )   // true when the last element fills
             Out(Results);
       };

   detach Primes ( int N )
       {  return;
          Array Results(N/2);   // one slot per odd candidate tested below
          for ( int i=0; i<N/2; ++i )
             CalculateOne(i,Results);
       };
};

FIG. 26A: Parallel prime number calculation using the array object from FIG. 26

The “Array” object described above now allows us to create a version of the “Primes” code that completely reproduces the interface of the LabVIEW™ “virtual instrument” discussed in the application note. Code that detects the end-of-calculations condition is now added, and the result is reported to a connector, as shown in FIG. 26A. The “Results” array is defined on the stack, which assumes that automatic maintenance of stack instances was implemented as described with the examples shown in FIGS. 8A and 8B. The “Results” array could instead be allocated entirely on a garbage-collected heap. This example will be used later to demonstrate the applicability of stress-flow to visual programming. The data could also be declared as object member data, since in this particular case the ability to retrigger calculations of the same prime numbers is not necessary. Interestingly enough, unlike in prior-art systems, allowing full unrestrained parallelism is what comes most naturally with stress-flow. It is restraining parallelism, when such restraint is needed, that generally may require some extra work. In this particular case, the simplest way to restrict multiple simultaneous calls to the “Primes” constructor atom would be to move its contents from the relaxed to the stressed section, as sketched below.
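For illustration only, and assuming the section semantics described above (code placed before the return statement executes in the stressed section, holding the atom's lock), such a restricted constructor might be sketched in the notation of the figures as follows:

   detach Primes ( int N )
       {  Array Results(N/2);          // executes in the stressed section,
          for ( int i=0; i<N/2; ++i )  // so a second call to this atom
             CalculateOne(i,Results);  // must wait until the loop completes
          return;                      // nothing is left for the relaxed section
       };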
