ruby on rails - Thread running in Middleware is using old version of parent's instance variable


I've used the Heroku tutorial to implement WebSockets.

It works with Thin, but does not work with Unicorn or Puma.

There is also an echo message implemented that responds to the client's message. It works on every server, so there are no problems with the WebSockets implementation itself.

The Redis setup is correct (it catches messages and executes the code inside the subscribe block).

How it works now:

On server start, an empty @clients array is initialized. A new thread is started that listens to Redis and is intended to send messages to the corresponding user's connections from the @clients array.

On page load, a new WebSocket connection is created and stored in the @clients array.

If I receive a message from the browser, I send it to all clients connected as the same user (that part works on both Thin and Puma).

If I receive a message from Redis, I look up the user's connections stored in the @clients array. Here the weird thing happens:

  • If running Thin, it finds the connections in the @clients array and sends the message to them.

  • If running Puma/Unicorn, the @clients array is empty, even if I try the following in order (without a page reload or anything):

    1. Send a message from the browser -> @clients.length is 1, the message is delivered
    2. Send a message via Redis -> @clients.length is 0, the message is lost
    3. Send a message from the browser -> @clients.length is still 1, the message is delivered

Could you please clarify what I am missing?

Related config of the Puma server:

workers 1
threads_count = 1
threads threads_count, threads_count

Related middleware code:

require 'faye/websocket'
require 'redis'

class NotificationsBackend

  KEEPALIVE_TIME = 30              # value assumed; not shown in the original snippet
  CHANNEL        = "notifications" # channel name assumed; not shown in the original snippet

  def initialize(app)
    @app     = app
    @clients = []
    Thread.new do
      redis_sub = Redis.new
      redis_sub.subscribe(CHANNEL) do |on|
        on.message do |channel, msg|
          # Logging @clients.length here returns 0
          # [..] retrieve user
          send_message(user.id, { message: "echo: #{msg}" })
        end
      end
    end
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env, nil, { ping: KEEPALIVE_TIME })

      ws.on :open do |event|
        # [..] retrieve current user
        if user
          # add the ws connection to the @clients array
        else
          # close ws
        end
      end

      ws.on :message do |event|
        # [..] retrieve current user
        Redis.current.publish(CHANNEL, { user_id: user.id, message: "echo: #{event.data}" }.to_json)
      end

      ws.rack_response
    else
      @app.call(env)
    end
  end

  def send_message(user_id, message)
    # Logging @clients.length here returns the correct result
    # cs = connections that belong to the client
    cs.each { |c| c.send(message.to_json) }
  end
end
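For completeness, a middleware like this would typically be mounted in the Rack/Rails middleware stack. A minimal sketch, assuming a standard Rails application (the actual wiring is not shown in the question):

# config/application.rb (sketch; assumes NotificationsBackend is required or autoloaded)
module MyApp
  class Application < Rails::Application
    # Insert the WebSocket middleware so it can intercept upgrade requests
    # before they reach the rest of the app.
    config.middleware.use NotificationsBackend
  end
end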

Unicorn (and apparently Puma) both start a master process and fork one or more workers. fork copies (or at least presents the illusion of copying - the actual copy only happens when pages are written to) the entire process, but only the thread that called fork exists in the new process.
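To illustrate that point with a standalone sketch (not part of the app above): a thread started in the parent keeps running only in the parent; in the forked child it simply does not exist, so any state it was supposed to maintain stops changing there.

# Standalone demonstration (MRI on a Unix-like system): only the thread
# that calls fork survives in the child process.
counter = 0
Thread.new { loop { counter += 1; sleep 0.1 } }

sleep 0.5                      # let the thread run for a while in the parent
pid = fork do
  snapshot = counter
  sleep 0.5                    # no background thread exists here any more
  puts "child:  #{snapshot} -> #{counter}"   # counter is frozen at its fork-time value
end

sleep 0.5
Process.wait(pid)
puts "parent: #{counter}"      # still increasing, the thread only runs here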

Clearly your app is being initialised before being forked - this is done so that workers can start quickly and benefit from copy-on-write memory savings. The consequence is that your Redis-checking thread is running in the master process, whereas @clients is being modified in the child process.
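The same fork semantics also explain the empty array: the child gets a copy of the parent's memory at fork time, and writes made in one process are never visible in the other. A minimal sketch of that behaviour (standalone, not the poster's code):

# Standalone demonstration: after fork, parent and child each have
# their own copy of the array.
clients = []

pid = fork do
  clients << :websocket_connection        # the "worker" registers a connection
  puts "child sees:  #{clients.length}"   # => 1
end

Process.wait(pid)
puts "parent sees: #{clients.length}"     # => 0, the child's change never reaches the master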

You can work around this by either deferring the creation of the Redis thread or disabling app preloading. However, you should be aware that this setup will prevent you from scaling beyond a single worker process (with Puma and a thread-friendly JVM such as JRuby this is less of a constraint).
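One common way to defer the Redis thread with Puma is to start it from the on_worker_boot hook, so it runs inside the forked worker next to the @clients array. This is a sketch of the idea, not the poster's actual config; NotificationsBackend.start_listener is a hypothetical class method that would wrap the Thread.new / subscribe code currently living in initialize:

# config/puma.rb (sketch)
workers 1
threads_count = 1
threads threads_count, threads_count

preload_app!

on_worker_boot do
  # Runs in each worker after the fork, so the subscriber thread lives in the
  # same process that owns the @clients array.
  # NotificationsBackend.start_listener is hypothetical; it would wrap the
  # Thread.new / Redis subscribe code from initialize.
  NotificationsBackend.start_listener
end

The alternative mentioned above is simply to skip preload_app! so the middleware, and with it the Redis thread, is initialised after the fork. Either way the caveat stands: a @clients array held in one worker's memory cannot be shared across multiple worker processes.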

