Evented Ruby


http://www.youtube.com/watch?v=4iFBC-xbE9I

 

Rails Concurrency

tweet = Tweet.new(params["tweet"]) # 1.
tweet.shorten_links!               # 2. network
tweet.save                         # 3. disk

 

Node Concurrency

var tweet = new Tweet();              // 1.
tweet.shortenLinks(function(tweet) {  // 2. callback
  tweet.save(function(tweet) {        // 3. callback
  });
});

We’re doing blocking I/O here: shorten_links! goes over the network to an API like bit.ly, and save goes to disk to persist our data.

Event-driven programming with callbacks

http = EM::HttpRequest.new('http://railsconf2012.com/').get
http.callback {
  # request finished, do the next thing
}

So if railsconf2012.com takes forever to load, we have registered a callback for it and moved on!
OR
Faraday.default_adapter = :em_synchrony
response = Faraday.get 'http://railsconf2012.com/'
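
With the em_synchrony adapter the call only looks blocking: the current fiber is paused while the reactor waits on the socket, then resumed with the response. A minimal sketch of running it inside the reactor, assuming the em-synchrony and em-http-request gems (the requires here are my assumption, not from the talk):

require 'em-synchrony'
require 'em-synchrony/em-http'   # fiber-aware patch for em-http-request (assumption)
require 'faraday'

Faraday.default_adapter = :em_synchrony

EM.synchrony do                  # boots the reactor and runs the block inside a root fiber
  response = Faraday.get 'http://railsconf2012.com/'  # fiber yields here, resumes with the response
  puts response.status
  EM.stop
end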


Fibers are primitives for implementing lightweight cooperative concurrency in Ruby. Basically, they are a means of creating code blocks that can be paused and resumed, much like threads. The main difference is that they are never preempted and that the scheduling must be done by the programmer and not by the VM.
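
A tiny example of the pause/resume mechanics, in plain Ruby with nothing EventMachine-specific:

fiber = Fiber.new do
  puts "step 1"
  Fiber.yield        # pause here and hand control back to the caller
  puts "step 2"
end

fiber.resume         # prints "step 1", returns at the yield
puts "caller does something else"
fiber.resume         # prints "step 2"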

Evented Interface

redis.subscribe(channel) do |on|
  on.message do |channel, msg|
    # happens in the future
  end
end

only use blocks for semantic events
hide system events in library code


When we’re talking about blocking I/O, I think it’s best to hide the event behind a synchronous interface so we can keep our domain abstraction clean. But just because we can use fibers to hide our evented plumbing doesn’t mean every event should be hidden behind a synchronous interface.

So, for example, even though this code snippet uses blocks to register for publish events in Redis, we generally don’t want to hide this, because the event is the subject of what we’re talking about.
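
To make the distinction concrete, a hedged sketch, assuming redis-rb with its fiber-aware EventMachine/synchrony connection driver (the driver: :synchrony option and the require are my assumption, not from the talk):

require 'redis'
require 'redis/connection/synchrony'   # fiber-aware driver (assumption)

# System event: blocking I/O hidden behind a synchronous-looking call
redis = Redis.new(driver: :synchrony)
value = redis.get("tweet:1")           # the fiber yields while the reactor waits

# Semantic event: left evented, because the event is the point
redis.subscribe("tweets") do |on|
  on.message do |channel, msg|
    # runs in the future, whenever someone publishes
  end
end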


http://stackoverflow.com/questions/9052621/why-do-we-need-fibers




One Request per Fiber

wrap each request in its own fiber

web requests are independent from one another

switch between requests instead of processes

We have a reactor in one fiber, but we still need to have requests in their own fibers so we can pause and resume different fibers.

The web makes this extra easy because web requests are naturally independent of one another.

Once we have one request per fiber, our reactor-aware app server can switch between fibers when there’s blocking I/O, rather than switching to a different process.
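
A rough conceptual sketch of what a fiber-per-request middleware does. This is illustrative only, not Rack::FiberPool’s actual implementation, and it assumes an async-capable server like Thin that understands env['async.callback'] and the -1 status:

class FiberPerRequest
  def initialize(app)
    @app = app
  end

  def call(env)
    Fiber.new do
      # Inside the fiber, fiber-aware drivers can pause us on blocking I/O.
      response = @app.call(env)
      env['async.callback'].call(response)  # hand the finished response back to the server
    end.resume
    [-1, {}, []]  # tell the server the response will arrive asynchronously
  end
end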



Rack::FiberPool

# Rails
config.middleware.prepend Rack::FiberPool

# Generic rack apps: sinatra, grape, etc
use Rack::FiberPool


Because Rails is Rack, it’s very easy to wrap each request in a fiber. There’s a gem called Rack::FiberPool that does it for you. We basically add it to the top of Rails’ middleware stack, and each incoming request will be wrapped in its own fiber.

And since it’s Rack, adding it to any Rack application is also easy, so you can add this to Sinatra or other Ruby web frameworks.



Recap

App server is reactor-aware
One fiber per request
App code is unchanged

The only infrastructure change we’ve made is swapping the app server, and there’s a good chance you’re already using an app server that’s reactor-aware.

The only code change we’ve made is configuring a middleware that wraps each request in its own fiber. We haven’t touched any of our models, controllers, or views. All of that continues to work like it used to.

But if you try and benchmark your application at this point, your app isn’t going to feel any faster.



Starting Points
data stores
  redis, mysql, postgresql, mongo
http
  Faraday: use an EM-aware http adapter
Kernel.system calls
  EM.popen - non-blocking version
  EM.defer - run a blocking call outside of the reactor thread (see the sketch below)
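
A minimal sketch of EM.defer, which hands a blocking operation to EventMachine’s thread pool and runs a callback back on the reactor thread when it finishes (the shell command is just a placeholder):

require 'eventmachine'

EM.run do
  blocking_work = proc { `sleep 1 && echo done` }   # placeholder for any blocking call
  on_done       = proc { |output| puts "finished: #{output.strip}"; EM.stop }

  EM.defer(blocking_work, on_done)  # work runs off the reactor thread; on_done runs back on it
end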




