Stok Footage

Continually experimenting with new ideas and techniques — Reconstructing, Developing, Modernising.

Stumbling Through Elixir

I’m working my way through 30 Days of Elixir at a relaxed pace, and I’m starting to get a sense of the differences in approach between the languages I already know (procedural and OO) and Elixir.

Today’s exercise (09-ping.exs) led me to stub my toe on an old OS X irritation: a stingy default limit on file descriptors per process. In an attempt to fix this (see the last few commits on GitHub) I looked into Elixir’s Task.start/3 to understand what was going on, and Elixir’s try … rescue … end to try to fix it.
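For anyone following along: Task.start/3 takes a module, a function name, and a list of arguments, and fires off that call in a new Elixir process without waiting for a result. A minimal sketch (the message printed is just illustration):

```elixir
# Task.start/3 spawns an unlinked, fire-and-forget task running
# the given module/function/args call.
{:ok, pid} = Task.start(IO, :puts, ["hello from a task"])
```

The return value is `{:ok, pid}`, so the caller can keep the pid around even though it never awaits a reply.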

  @doc """
  Ping a single IP address returning a tuple which `ping_async` can return.
  """
  def run_ping(ip) do
    # This is a Ruby-ish way of dealing with failure...
    # TODO: Discover the "Elixir way"
    try do
      # return code should be handled somehow with pattern matching
      {cmd_output, _} = System.cmd("ping", ping_args(ip))
      alive? = not Regex.match?(~r/100(\.0)?% packet loss/, cmd_output)
      {:ok, ip, alive?}
    rescue
      e -> {:error, ip, e}
    end
  end
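Following my own TODO above, one more Elixir-ish shape would be to pattern match on the exit status that System.cmd returns instead of reaching for try/rescue. This is only a sketch: the `Pinger` module name is mine, and `ping_args/1` here assumes OS X-style ping flags (one packet, one-second timeout); the real helper lives in the exercise.

```elixir
defmodule Pinger do
  # Sketch: handle the exit status via pattern matching rather than
  # rescuing an exception. A non-zero exit becomes an :error tuple.
  def run_ping(ip) do
    case System.cmd("ping", ping_args(ip)) do
      {output, 0} ->
        alive? = not Regex.match?(~r/100(\.0)?% packet loss/, output)
        {:ok, ip, alive?}

      {_output, exit_code} ->
        {:error, ip, exit_code}
    end
  end

  # Hypothetical stand-in for the exercise's helper; flags vary by platform.
  defp ping_args(ip), do: ["-c", "1", "-t", "1", ip]
end
```

Note this still won’t catch the out-of-file-descriptors failure, which surfaces as a raised error from System.cmd rather than a non-zero exit status, so the try/rescue above earns its keep for that case.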

As far as I can tell the root cause of the problem was spawning 254 OS processes, each needing stdin, stdout, and stderr streams. If too many processes are running at the same time then we run out of file descriptors. One fix is to raise the file descriptor limit with something like this:

ulimit -n 2048

When I have a little more Elixir experience I suspect I’d reach for Elixir’s processes and supervision instead, but I’ll discover that another day.
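As a teaser for that future post: newer Elixir versions (1.4+) ship Task.async_stream, which bounds how many tasks run at once, so the file descriptor limit would never be hit in the first place. A sketch, assuming a `Pinger.run_ping/1` like the function above (the module name, concurrency cap, and IP range are all illustrative):

```elixir
# Sketch: cap concurrency so only max_concurrency ping processes
# (and their file descriptors) are alive at any moment.
ips = for n <- 1..254, do: "192.168.1.#{n}"

results =
  ips
  |> Task.async_stream(&Pinger.run_ping/1, max_concurrency: 25, timeout: 10_000)
  |> Enum.map(fn {:ok, result} -> result end)
```

Backpressure via `max_concurrency` sidesteps the resource limit rather than raising it, which feels more like the "Elixir way" the TODO comment is after.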
