The OpenShift forums have been retired, but you can still read and search them.

Haproxy problems after push


I have a problem migrating my single-gear application to a scaled one. I have made two attempts; in the first one, after creating my new application and pushing my code, I was able to see the haproxy status page but not my application page.

After that I deleted the new application and tried again. In this second attempt, after pushing my code, I see a "temporarily unavailable" page when I open my app URL. I should say that my code push is a merge of my single-gear app with its .openshift folder. I have probably overwritten some crucial hook or something related to haproxy.

I have been digging into it, and the only thing I can see is that the haproxy daemon has a connection error. I can reproduce it by running 'haproxy_ctld_daemon start', and the error log is:

/usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:123:in `initialize': Connection refused - /var/lib/openshift/6026aa3410a0476e9a39589dee88fced/haproxy-1.4/run/stats (Errno::ECONNREFUSED)
    from /usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:123:in `open'
    from /usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:123:in `refresh'
    from /usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:108:in `initialize'
    from /usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:345:in `new'
    from /usr/libexec/openshift/cartridges/haproxy-1.4/info/bin/haproxy_ctld.rb:345:in `<top (required)>'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/application.rb:176:in `load'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/application.rb:176:in `start_load'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/application.rb:257:in `start'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/controller.rb:69:in `run'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons.rb:139:in `block in run'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `call'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `catch_exceptions'
    from /opt/rh/ruby193/root/usr/share/gems/gems/daemons-1.0.10/lib/daemons.rb:138:in `run'
    from /usr/libexec/openshift/cartridges/embedded/haproxy-1.4/info/bin/haproxy_ctld_daemon.rb:22:in `<main>'

Before doing any push, the scaled application works fine; I can see the OpenShift JBoss welcome page.

Could anyone give me a hand with this? Thank you very much!

I have some more information about this. If I log in to my haproxy gear and execute "/etc/init.d/haproxy status", the output is:

haproxy is stopped

And if I run "/etc/init.d/haproxy start" the output is:

Iniciando haproxy: [WARNING] 017/085339 (19416) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 8017. [ALERT] 017/085339 (19416) : Starting frontend main: cannot bind socket [FALLÓ]

What's wrong with haproxy after my git push?

Any help would be much appreciated.

@mitking, what sort of app server are you running + what IP address are you binding to?
Looks like haproxy fails to bind as per your errors:
cannot bind socket [FALLÓ]

Possibly because the app server is binding to the interface + port 8080 that haproxy expects to bind to.
Try fixing that and see if it works (or temporarily try listening on a different port, e.g. 16000, in your app server
to debug and see if haproxy starts up).


@mitking, what's your application URL? I'm wondering what the haproxy status page shows about the framework gear. Usually, the haproxy status page shows up when there are errors in the framework gear. Reviewing the logs might help pinpoint the issue.


Thank you for your responses, guys. Let me explain what I have done since my last post. Haproxy was broken, so I decided to start again. This time I think I have done the process the way it is meant to be done:

  • Deleted my previous attempt with the rhc command: rhc app delete -a wexserver
  • Created a new scaled application via the web interface (jbossas 7.1).
  • Added a new cartridge for the backend DB: PostgreSQL 8.4.
  • Set up port forwarding on my end and restored my database into the new PostgreSQL instance using pgAdmin on my workstation.
  • Added a new remote to my git repo with my preferred tool: SourceTree.
  • Pulled from the remote into my local branch.
  • Resolved conflicts, keeping all hooks and almost all files "using theirs".
  • Resolved conflicts in the root pom.xml and standalone.xml "using mine", adding the new things missing from my files, such as the new environment variable names. In my old gear the JBoss variables were XX_JBOSS_XX and now they are XX_JBOSSAS_XX. I don't think I have missed anything.
  • Committed after the merge.
  • Pushed the code to my new remote git repo.
  • Followed the process in the logs via ssh and tail_all.

Everything seems to be OK: JBoss starts normally, the DB schema validates, and the app and contexts deploy. Haproxy seems to be working too, and it is, because now I see the status page. But that's the problem now: if I open my app URL, I see the haproxy status page but not the expected blank page, which is what I should see if my app were running.

After all of this I have been looking into the logs, but I don't see anything relevant; everything seems to be OK. I also tried to ssh into my proxy gear, ssh from there into the JBoss gear, and run ctl_app restart. The restart went OK, but haproxy still says my server gear is down.

I'm using an https socket binding on 8443 in my JBoss app; could that be the reason?

@ramr, I don't know if I understood what you suggested. Are you suggesting changing my 8080 binding of JBoss to another port? @Nam, I hope you can now see something in my haproxy status page.

Thanks a lot, I love the great work you are doing!

Note: I have tested my iOS application against the JBoss gear, bypassing haproxy, and everything seems to work perfectly. I only need the proxy to recognize that my JBoss is listening. I have reviewed the internal IPs and ports referenced by haproxy and JBoss, and everything looks fine to me...

Problem solved!

After studying the haproxy documentation a little and reviewing my app configuration files, I have managed to solve the problem.

Everything was OK: IPs, ports, etc. But one line in the haproxy.cfg file caught my attention:

option httpchk GET /

This sends a complete HTTP request to the backend to check whether it is alive. My backend is mainly a bunch of REST services for an iOS app; it has two contexts defined but nothing on the root path. So that line, run against my application, doesn't return what haproxy wants to see from a live application: a 2xx or 3xx response.

My two workarounds have been:

  • Remove that line from haproxy.cfg. This works: haproxy falls back to a plain TCP check to see whether my backend is alive, and it works OK.
  • Change the checked path to one of the contexts defined in my EAR file, like: option httpchk GET /appcontext. This works perfectly too.
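
The second workaround can be sketched as a haproxy.cfg fragment (the backend name, gear address, and /appcontext path below are illustrative placeholders, not taken from an actual gear):

```
backend app_backend
    mode http
    # Health check: haproxy only marks the server as up when the checked
    # path answers with a 2xx or 3xx response, so point it at a path the
    # application actually serves instead of the empty root.
    option httpchk GET /appcontext
    server gear1 127.0.250.129:8080 check
```

With `check` enabled on the server line, haproxy issues the httpchk request periodically; removing `option httpchk` instead makes it fall back to a plain TCP connect check, which is the first workaround.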

I have chosen the second approach. What do you guys think would be the best solution? And a second important question: is this file (haproxy.cfg) overwritten by OpenShift core updates? If so, shouldn't this file be in our .openshift folder?

And of course I hope this information is valuable for you.

As always, thank you!

Thank you for posting your solution @mitking

It helped me solve a similar problem: my app was not working in scaled mode because I had no route defined on the root path. HAProxy would ping the base URL and get a 404 back, so HAProxy thought my app was unavailable and returned a 503.

In my node.js Express app I added a generic base route, so when HAProxy pings the app it returns a 200.

app.get('/', sites.base);

Glad to hear that, @petar ;)