Sunday, February 14, 2016

JavaScript will evolve faster as a language than a compile target

Author:Kamanashis Roy
Note: I wrote this blog on request

JavaScript survived and thrived to become one of the most popular languages for web development. There were concerns about best practices and performance in JavaScript, and some people went further and called the language buggy. But it served its purpose as a client-side scripting language.

The main criticisms against JavaScript were:
- JavaScript has fragmented implementations (because of the lack of standardization and the browser war).
- Standardization turned JavaScript into a compiler target rather than a language in its own right.
- JavaScript is slow.

The criticism of JavaScript gave rise to compilers that target it. These source-to-source compilers generate acceptable JavaScript code. One of them is Google Web Toolkit, which generates JavaScript from Java; the idea is to impose best practices. After the standardization and widespread acceptance of JavaScript, there was further development of source-to-source compilers targeting it. Yet JavaScript itself remains more popular for web development than those compiled languages. Let’s see how JavaScript got away with the criticisms.

Fragmented implementation and lack of standardization

The browser war gave rise to reverse-engineered implementations of JavaScript, and individual programmers could not afford the time to write JavaScript for each browser. So some people preferred to generate code for the different browsers automatically, using compilers.

To solve the problem, Netscape allowed JavaScript to be standardized as ECMAScript. Now there are multiple open-source implementations of JavaScript, and the most popular of them use just-in-time compilation, which gives developers a better opportunity to write uniform code. There are also frameworks like jQuery that wrap the platform-dependent code to present a uniform API. Finally, fragmentation is not a problem anymore.
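The kind of wrapping jQuery does can be sketched with a tiny cross-browser helper. The `addEvent` name below is made up for illustration; the `attachEvent` branch reflects the old Internet Explorer API that such wrappers had to paper over:

```javascript
// Attach an event handler in a browser-independent way.
// Modern browsers expose addEventListener; old IE used attachEvent.
function addEvent(target, type, handler) {
  if (target.addEventListener) {
    target.addEventListener(type, handler, false);
  } else if (target.attachEvent) {
    target.attachEvent('on' + type, handler);
  }
}

// Callers write the same line everywhere, whatever the browser:
// addEvent(document.getElementById('btn'), 'click', onClick);
```

Multiply this by every DOM quirk and the value of a shared, well-tested wrapper library becomes obvious.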

Standardization made JavaScript a compiler target

The cross-browser compatibility of JavaScript also gave rise to source-to-source compilers, such as Emscripten, that target asm.js (a counterpart to technologies like NaCl). asm.js is a strict subset of JavaScript that engines can compile ahead of time, which lets browsers run asm.js-based applications much faster.
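For a flavour of what such compilers emit: asm.js code annotates types with coercions like `x|0` (32-bit integer), which is what makes ahead-of-time compilation possible. A hand-written, asm.js-style sketch (the module name is made up):

```javascript
// A minimal asm.js-style module. The "use asm" pragma plus the
// |0 coercions tell the engine that every value is a 32-bit
// integer, so the function can be compiled ahead of time.
function MiniModule() {
  "use asm";
  function add(x, y) {
    x = x | 0;          // parameter annotated as int
    y = y | 0;
    return (x + y) | 0; // result annotated as int
  }
  return { add: add };
}
```

Engines without asm.js support still run this as ordinary JavaScript; the annotations merely unlock the fast path.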

JavaScript is slow

JavaScript performance suffers for several reasons, one of them being dynamic typing and another being dynamic binding. There are ways to mitigate these issues. The V8 engine uses a JIT compiler that compiles JavaScript into native code at run time instead of interpreting it, and the V8 developers compare JavaScript performance with C/C++. They also have some best-practice suggestions for JavaScript. The ideas boil down to:

  • not to use undefined but rather keep types static,
  • to avoid arrays of varying element types,
  • and finally to use fewer object layouts and to initialize all the fields in the constructor.
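The effect of keeping types stable can be illustrated with V8's hidden classes: objects that receive the same fields in the same order share one internal shape, which lets the JIT specialize property access. A do/don't sketch:

```javascript
// Good: every Point gets the same fields, in the same order,
// initialized in the constructor -- all instances share one shape,
// so the JIT can compile fast property accesses.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Bad: adding fields after construction, or only sometimes,
// forces the engine to track several shapes for the "same" object.
function makeSloppyPoint(x, y, withColor) {
  var p = { x: x };
  p.y = y;                        // field added after construction
  if (withColor) p.color = 'red'; // field present only sometimes
  return p;
}

function dist(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}
```

Both shapes compute the same answer; the difference only shows up in how well the JIT can optimize the hot path.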

Such poor coding can be identified by static analysis, and the analysis tools can be integrated into IDEs and browsers. This way programmers get notified of performance issues.

Frameworks are the right way

Now there are some emerging trends in JavaScript development, driven by frameworks and runtimes. Some of them are:

  • jQuery (more a library than a framework, with an easy-to-use API and a lot of open-source modules),
  • Prototype.js,
  • three.js,
  • React,
  • Backbone.js,
  • Node.js (not a framework but a runtime for server-side JavaScript and desktop applications, featuring a lot of packages/modules via npm).
Some of them are domain-specific and some are for general use. These frameworks package popular programming patterns for an object-based language and allow us to reuse those well-written patterns, which keeps user code small. It is like "write less, do more". Less code means fewer bugs and perhaps fewer performance hazards; in that sense, less is good. These frameworks also allow us to write browser-independent code, and some take advantage of ahead-of-time compilation. This is why they are immensely popular.

A note on nodeJS

Node.js is a JavaScript runtime that works on a vast range of platforms. It uses an event-driven approach to concurrent programming, which is closely related to the observer pattern that is popular in the object-oriented paradigm. There were a lot of issues with concurrency and scalability in server-side scripting, and Node.js set out to solve them in an event-driven way. As JavaScript was already popular in the web-development arena, Node.js became a huge success. It is supported by big companies, and there is a package ecosystem, npm, built around it.

There are also a lot of frameworks that work on top of Node.js, some of them being Express.js and Koa.js. And there is the popular Atom editor (which runs on io.js, a Node.js variant) that supports development based on Node.js and JavaScript.


As a C developer, I want to share that I like C very much. But I also know that C projects contain a lot of bloated code, and things are harder because the language does not have a memory manager. One intriguing thing about C, though, is its portability: it can be compiled on a wide range of platforms, in the same way that JavaScript is portable across browsers.

I worked on those ideas and wrote a compiler that generates C code from object-oriented, garbage-collected code. I also grew a framework to support server-side development. In other words, I targeted C the same way that the source-to-source compilers target JavaScript.

But sometimes I go back to plain C, and I have discovered that my C framework also supports doing cool things with small, dependable code. I think my compiler and framework gave me a greater understanding of the cool stuff in C.

I think the compilers targeting JavaScript are likewise giving new directions to JavaScript-based development. JavaScript is now attracting developers with its runtimes, frameworks and IDEs, and there are a lot of packages available for these frameworks. They are now also used for desktop (GUI) tools, command-line applications and server-side scripting.

Wednesday, February 3, 2016

Memory profiler for shotodol framework

The C programming language does not come with a garbage collector, and it is not object-oriented. But C is inevitable: C programs are portable across platforms, and a lot of libraries and server applications are written in C. The Aroop compiler gives developers the flexibility to write object-oriented Vala code that generates C code, and it is likewise possible to use existing C libraries from Vala code.

So Aroop is a source-to-source compiler: it compiles Vala code into C. The Aroop compiler is a fork of the original Vala compiler. The stock Vala compiler generates code that uses the GObject library to represent object data; Aroop, by contrast, uses object pools.
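Object pooling is not specific to C; the idea can be sketched in JavaScript (the names here are illustrative, not Aroop's actual API). Instead of allocating each object fresh, a pool pre-allocates a batch of slots, hands them out, and takes them back for reuse:

```javascript
// A minimal object pool: pre-allocate a batch of slots, hand them
// out on acquire(), and recycle them on release() instead of
// paying an allocation (and later a collection) per object.
function Pool(create, batchSize) {
  this.create = create;       // factory for new slots
  this.batchSize = batchSize; // how many slots to add per batch
  this.free = [];             // recycled, ready-to-reuse slots
}

Pool.prototype.acquire = function () {
  if (this.free.length === 0) {
    // Grow by one batch, like a pool allocating 16 slots at a time.
    for (var i = 0; i < this.batchSize; i++) {
      this.free.push(this.create());
    }
  }
  return this.free.pop();
};

Pool.prototype.release = function (obj) {
  this.free.push(obj); // the slot becomes reusable
};
```

The payoff is the same one Aroop is after: allocation cost is amortized over a batch, and hot objects are recycled instead of churning the allocator.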

Shotodol is an application framework based on the Aroop compiler. It has instrumentation for memory profiling, dependency injection and more. In this blog we will dissect the memory-profiling feature.

The profiler can show the memory usage of a specific module, dump the existing string allocations and more.

help prof
Executing:help prof
<            help> -----------------------------------------------------------------
         -heap          <none>  Show all the memory allocated in all the factories
       -string          <none>  Dump the string buffers
       -object          <none>  Show the objects
       -module          <text>  Select/filter out a module
<      Successful> -----------------------------------------------------------------

For example, if we investigate the memory usage of “core/console” module, we get the following.

prof -module core/console
Executing:prof -module core/console
<        profiler> -----------------------------------------------------------------
Source               Line  Module     -- pools      allocated  used       objects    slots      bitstring  pool size 
vsrc/ConsoleCommand.  2116 core/conso --          0          0          0          0          0         56         16
vsrc/ConsoleCommand.   954 core/conso --          0          0          0          0          0         56         16
vsrc/ConsoleCommand.   804 core/conso --          1      13792       8560         10         10         56         16
13792 bytes total, 8560 bytes used
<      Successful> -----------------------------------------------------------------

The table above shows the line numbers where the memory pools are created; "vsrc/ConsoleCommand.c" is the generated C source. Each pool allocates 16 slots at a time (the count is set when the pool is created), and the slot size is likewise assigned at creation. The first two rows show pools that are empty. The last row shows a pool containing 10 objects taking 8560 bytes in total, while the pool has 13792 bytes (16 slots) allocated, and it shows that there is only one pool.

Profiling helps to set the optimal pool size. In the optimal condition the pool does not waste too much unused memory, and there are fewer calls to malloc for pool creation.
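The trade-off can be read straight off the profiler's numbers. For the last row above, 8560 of 13792 allocated bytes are in use; a sketch of the arithmetic (in JavaScript, purely for illustration):

```javascript
// Pool utilization from the profiler's output:
// bytes used by live objects vs. bytes allocated for the pool.
function utilization(allocatedBytes, usedBytes) {
  return usedBytes / allocatedBytes;
}

var u = utilization(13792, 8560);
// Roughly 62% of the pool is in use; the remaining ~38% is the
// price of allocating 16 slots per batch for only 10 live objects.
```

Shrinking the batch size reduces that waste, at the cost of more frequent pool growth.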

It also identifies memory leaks and circular references that can defeat garbage collection. This feature is particularly useful in server applications, where memory leaks are not desirable at all.

I hope to describe other Shotodol features in the future. I just wanted to explain the cool things it does.

Monday, February 1, 2016

Line of code

It is nice to have some numbers associated with my identity. I like numbers and I like to define myself, so as part of that I did a small project to find out "which language I am".

I admire the cloc application: it counts the lines of code in an archive. Lines of code give a number that measures the amount of work. In that sense, my work summary is as below.

File                  files  blank  comment    code
libsync.txt             241   8649     9998   56467
aroop.txt               256   3561     2506   22042
miniim.txt               88   1757     3824    8572
shotodol.txt            134    574      867    6564
roopkotha.txt            66    737      990    4903
opp_factory.txt          23    517      308    3875
roopkotha_vela.txt       56    302      638    2689
hciplus.txt              29    207      141    2089
barrel.txt               47    270      135    1864
shotodol_net.txt         20    115      161    1454
shotodol_web.txt         20    116      189    1151
shotodol_db.txt          12     74      122     689
shotodol_script.txt       8     34       39     354
shotodol_media.txt        2     12       20     100
SUM:                   1002  16925    19938  112813

That is glorious: 112K lines of code is a big body of work. Needless to say, it only counts my major projects.

Language             files  blank  comment    code
C                      127   8070     7770   58476
Vala                   424   3483     3279   27305
C/C++ Header           171   1662     3710    6752
Java                    30    830     2867    3868
Vala Header             19    283      245    3387
C++                     16    448      589    2959
PHP                     46    272      173    1907
CSS                     19    372      138    1789
XHTML                   26    325       62    1463
JavaScript              13    238      727    1438
make                    66    571       50    1423
Lua                     25    198      110    1252
Bourne Shell             7     51       93     221
Ant                      2     22       28     155
m4                       2     28       17     110
Perl                     1     32       36     107
Assembly                 2     14       41      84
HTML                     2      6        0      41
XML                      1      0        3      27
Bourne Again Shell       1      9        0      24
NAnt script              1      9        0      23
DOS Batch                1      2        0       2
SUM:                  1002  16925    19938  112813

I also wrote a small script to generate a pie chart out of this data.
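A sketch of what such a script computes: the share of each language in the total line count. The figures below come from the table above (only a few languages shown), and the function name is made up:

```javascript
// Turn cloc's per-language line counts into pie-chart percentages.
var counts = { C: 58476, Vala: 27305, Java: 3868, JavaScript: 1438 };
var total = 112813; // the SUM row from the cloc output

function percentages(counts, total) {
  var out = {};
  for (var lang in counts) {
    // one decimal place per slice
    out[lang] = Math.round((counts[lang] / total) * 1000) / 10;
  }
  return out;
}

var shares = percentages(counts, total);
// C accounts for roughly half of all the lines written.
```

Feeding these percentages to any charting library (or even a spreadsheet) produces the pie chart.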