Is Elixir or Common Lisp the best language for building a bootstrapped B2B SaaS in 2024?

There was this article and discussion on Lobsters last month: https://lobste.rs/s/ysam1j/why_elixir_is_best_language_for_building

I’ll give you the key takeaway right now:

the best language/tooling to start a SaaS with is the one you already know.

Now let’s discuss some more.

I want to be convinced and find the Grail, but I'm still torn between Python and, now, Common Lisp. When I look at maybe rewriting my Python/Django app in Elixir, I miss things: no admin dashboard (unless you pay 300 USD), no automatic DB migrations unlike Django and CL's Mito (though I don't have Ecto experience, so maybe it's a non-issue; I have since been pointed to Ecto migrations, which is a valid point), and a lot of code generation (it rots, doesn't it?)…

LiveView? I can use HTMX, its WebSockets extension, or Unpoly, and they are cross-stack. There are no compile-time type checks, like Python and unlike CL; the deployment story sits somewhere between Python's (dangerous) and CL's (build and ship a binary); and, because I am now spoiled by the richness of CL's image-based interactive features, I see much less of them in Elixir (no "compile this function and get warnings"). And oh, there's some syntax to pay attention to again: those {, %, =>, and so on. Elixir's Emacs modes don't look as good when Slime or Sly offer a ton of features. With Elixir I'm back at the terminal, which feels like a regression.

I’m spoiled.

So, is Elixir really the best language for building a bootstrapped SaaS? Python/Django, I admit, despite their flaws, have arguments in their favour. For a solo developer, Common Lisp is hyper-productive. Its web offering is minimal, but if you know the web, plug in a DB, HTMX, and a login system and you're on track. CL won't have shiny dashboards (wait, does it? There is a Grafana dashboard for SBCL and Hunchentoot: memory, threads, requests per second, GC state…) or a supervision tree (only ruricolist/moira to monitor and restart background threads), but you can get a GenServer-inspired actor library (mdbergmann/sento), and I believe they share runtime features: efficiency, and live reload is doable. Last but not least, CL is maybe easier to use for other tasks: ingesting data efficiently (SBCL is fast), small-ish binaries, scripts (now easier with my CIEL helper).

Elixir is definitely shinier and more enterprise-ready, and I don't know what I'm missing, but there's nothing ideal for a rewrite…

Now, after this comment of mine I've been pointed to Torch and Ecto's Gen.Migration, both of which look super useful. Good points. I also started to write my own CRUD admin dashboard for Common Lisp; let's see how this goes…

For, Map and Reduce in Elixir

Introduction

One of the students in my Introduction to Functional Programming course recently submitted a code snippet. It became evident that they assumed Elixir's 'for' construct works like the 'for' loops of imperative programming languages. However, this is not the case: Elixir's 'for' is fundamentally different in its behavior.

What’s a list comprehension?

The command ‘for’ in Elixir is a list comprehension. The result of a ‘for’ is a list.

For instance, in the example below, ‘i’ goes from one to ten. The result is a list containing each value of ‘i’ multiplied by 10.

for i <- 1..10 do
  i * 10
end
[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

You could do the same using Enum.map/2.

1..10
|> Enum.map(fn x -> x * 10 end)
[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

If, instead, you wanted the result of the sum of all the values of the list, you would have two options.

The first one is to use Enum.sum() to sum all values of the resulting list.

1..10
|> Enum.map(fn x -> x * 10 end)
|> Enum.sum()
550

The second option is to use Enum.reduce/2, although, as we will see, the result is not quite the same:

1..10
|> Enum.reduce(fn x, accum -> x * 10 + accum end)
541

What if I wanted to multiply all values of the resulting list? The following solution would not work.

1..10
|> Enum.reduce(fn x, accum -> x * 10 * accum end)
3628800000000000

This doesn't work because Enum.reduce/2 uses the first element of the range (1, not 10) as the initial accumulator. The expected result is the product 10 * 20 * … * 100:

10 * 20 * 30 * 40 * 50 * 60 * 70 * 80 * 90 * 100
36288000000000000

The correct way is:

1..10
|> Enum.reduce(1, fn x, accum -> x * 10 * accum end)
36288000000000000

What’s the difference between Enum.reduce/2 and Enum.reduce/3?
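
In short: Enum.reduce/3 takes an explicit initial accumulator as its second argument, while Enum.reduce/2 uses the first element of the enumerable as the starting accumulator and only applies the function from the second element onwards. That is why the reduce/2 sum above gave 541 instead of 550. A minimal comparison:

Enum.reduce([1, 2, 3], fn x, acc -> x + acc end)
6

Enum.reduce([1, 2, 3], 10, fn x, acc -> x + acc end)
16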

Back to for

‘For’ allows you to have more than one generator (the ‘i <- 1..10' part):

for i <- 1..3, j <- ["Brasil", "Mexico", "Angola"] do
  {:number, i, :country, j}
end
[
  {:number, 1, :country, "Brasil"},
  {:number, 1, :country, "Mexico"},
  {:number, 1, :country, "Angola"},
  {:number, 2, :country, "Brasil"},
  {:number, 2, :country, "Mexico"},
  {:number, 2, :country, "Angola"},
  {:number, 3, :country, "Brasil"},
  {:number, 3, :country, "Mexico"},
  {:number, 3, :country, "Angola"}
]

You can also add filters:

require Integer

for i <- 1..3,
    j <- ["Brasil", "Mexico", "Angola"],
    Integer.is_even(i),
    String.starts_with?(j, "B") do
  {:number, i, :country, j}
end
[{:number, 2, :country, "Brasil"}]
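
Beyond lists, the :into option lets a comprehension collect its results into another collectable, such as a map. A tiny sketch:

for {country, population} <- [{"Brasil", 214}, {"Mexico", 126}, {"Angola", 35}], into: %{} do
  {country, population * 1_000_000}
end
%{"Angola" => 35000000, "Brasil" => 214000000, "Mexico" => 126000000}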

There are many more things that you can do with ‘for’, ‘map’ and ‘reduce’. Explore Elixir’s docs to learn more!

Handling state between multiple processes with Elixir

Elixir works really well for concurrent code because of its functional nature and its ability to run work in multiple processes, but how do we handle state when our code is running all over the place? Well, there are some techniques, and in this article we'll learn more about them together, shall we?

Table of contents

  • What is a process? How to use it with send and receive
  • Incrementing our experience with tasks
  • Designing state with the agent wrapper
  • Conclusion

What is a process? How to use it with send and receive

Processes are Elixir's answer to concurrent programming; a process is basically a continuously running unit of execution that can send and receive messages. In fact, all Elixir code runs inside processes. Although this sounds expensive, processes are super lightweight compared to threads in other languages, which lets us developers build incredibly scalable software with hundreds of processes running at the same time. Another great advantage of doing this in Elixir specifically is that the language is built on top of immutability and other functional programming concepts, so we can trust that these functions run completely isolated, without changing or depending on global state.

The most basic way to see a process in action is the spawn function: with it we can execute a function in a new process and get its PID.

iex(3)> pid = spawn(fn -> IO.puts("teste") end)
teste
#PID<0.111.0>
iex(4)> pid
#PID<0.111.0>
iex(5)> Process.alive?(pid)
false
iex(6)>

As you can see from the return of Process.alive?(pid), this process is already dead once it has run successfully, but we can easily add a sleep to observe this mechanism:

iex(2)> pid = spawn(fn -> :timer.sleep(10000); IO.puts("teste") end)
#PID<0.111.0>
iex(3)> Process.alive?(pid)
true
teste
iex(4)> Process.alive?(pid)
false
iex(5)>

Since we're sleeping for 10 seconds, the process is alive until it finishes running after the sleep and then dies. Cool, right? It's important to know that our main program did not hang: it simply put the function in a process and forgot about it. This allows us to create really modular and performant code by taking advantage of running across multiple processes.

Besides spawning functions in a process, we can pass information between processes using the send function and the receive block, as shown below:

iex(1)> defmodule Listener do
...(1)> def call do
...(1)> receive do
...(1)> {:hello, msg} -> IO.puts("Received: #{msg}")
...(1)> end
...(1)> end
...(1)> end
{:module, Listener,
 <<70, 79, 82, 49, 0, 0, 6, 116, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 240,
   0, 0, 0, 25, 15, 69, 108, 105, 120, 105, 114, 46, 76, 105, 115, 116, 101,
   110, 101, 114, 8, 95, 95, 105, 110, 102, 111, ...>>, {:call, 0}}
iex(2)> pid = spawn(&Listener.call/0)
#PID<0.115.0>
iex(3)> send(pid, {:hello, "Hello World"})
Received: Hello World
{:hello, "Hello World"}
iex(4)>

Observe that we define a function that acts as a general listener using the receive block. This works like a switch case where we can pattern match and perform a quick action; in this case we're simply printing to STDOUT. Once we spawn this listener, it's possible to use the returned PID to send it information using the send/2 function, which expects a PID and a value as arguments.

That way, it's possible to keep state in an immutable, process-isolated environment such as Elixir.
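
To go one step further, here is a minimal sketch of a process that keeps state by looping: the receive block handles a message and then calls itself again with the updated value.

defmodule Counter do
  # Holds the current count as the argument of the loop.
  def loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

pid = spawn(fn -> Counter.loop(0) end)
send(pid, {:increment, 5})
send(pid, {:get, self()})
receive do
  {:count, value} -> value
end
5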

Incrementing our experience with tasks

The Task module offers an abstraction on top of the spawn function while adding support for asynchronous behavior, i.e. running a function in a separate process and observing its behavior with wait functions. As you delve into Elixir, you'll discover that the Task module allows you to start a new process that executes a function and returns a task structure. With this structure in hand, you can easily get the function's return value using Task.await(task), as shown below:

iex(1)> task = Task.async(fn ->
...(1)>   IO.puts("Task is running")
...(1)>   42
...(1)> end)
Task is running
%Task{
  mfa: {:erlang, :apply, 2},
  owner: #PID<0.109.0>,
  pid: #PID<0.110.0>,
  ref: #Reference<0.0.13955.659691257.723058689.43945>
}
iex(2)> IO.puts "a code"
a code
:ok
iex(3)> answer_to_everything = Task.await(task)
42
iex(4)> answer_to_everything
42
iex(5)>

First we see the "Task is running" message printed out, and then we get the task struct. Further on, we can execute any code in between, and when we're ready it's just a matter of using the Task.await function to retrieve the function's return value.

Task also provides a counterpart to the regular spawn function, called Task.start; we can even reuse the code shown at the beginning with the new module abstraction:

iex(1)> defmodule Listener do
...(1)> def call do
...(1)> receive do
...(1)> {:print, msg} -> IO.puts("Received message: #{msg}")
...(1)> end
...(1)> end
...(1)> end
{:module, Listener,
 <<70, 79, 82, 49, 0, 0, 6, 244, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 245,
   0, 0, 0, 26, 15, 69, 108, 105, 120, 105, 114, 46, 76, 105, 115, 116, 101,
   110, 101, 114, 8, 95, 95, 105, 110, 102, 111, ...>>, {:call, 0}}
iex(2)> {:ok, pid} = Task.start(&Listener.call/0)
{:ok, #PID<0.115.0>}
iex(3)> send(pid, {:print, "Eat more fruits"})
Received message: Eat more fruits
{:print, "Eat more fruits"}

It's useful to use the Task module because we get a higher level of abstraction. You may have noticed that the calling interface for Task.start and Task.async looks the same? We can easily swap between them, and with Task.async we additionally get the power of Task.await and Task.yield on top. That's the power of abstracting lower-level concepts!
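
As a small illustration, Task.yield/2 is like Task.await/2 with a non-crashing timeout: it returns nil if the task hasn't finished yet, and you can keep yielding until it does.

task = Task.async(fn -> :timer.sleep(5000); :done end)
Task.yield(task, 1000)
nil
Task.yield(task, 10_000)
{:ok, :done}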

Designing state with the agent wrapper

The Agent module provides another layer of abstraction, focused on managing state shared between multiple processes; it acts like a data structure wrapper for long-running interactions.

We can first start an agent instance with an initial value produced by a function, as shown below:

iex(1)> {:ok, agent} = Agent.start_link(fn -> [] end)
{:ok, #PID<0.110.0>}
iex(2)> agent
#PID<0.110.0>
iex(3)>

As you can see, we get a PID just like with the other abstractions; the difference here can be observed in the usage of the other functions.

For example, we can update the original list by prepending a value to it:

iex(3)> Agent.update(agent, fn list -> ["elixir" | list] end)
:ok
iex(4)>

That's the whole point of the agent abstraction: we can continuously update the state by passing pure functions as callbacks while reusing the same PID.

We can also read a particular value out of the data structure by using the following function:

iex(4)> Agent.get(agent, fn list -> list end)
["elixir"]
iex(5)>

See? It's as simple as returning the whole list from the callback function. You can imagine that it's possible to use any Elixir function to filter down this list if you want (see the sketch below) and keep iterating over the data structure.
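
For instance, a small sketch building on the same agent: the callback can transform the data before returning it, and Agent.get_and_update/3 can read and update the state in one step.

Agent.update(agent, fn list -> ["phoenix" | list] end)
:ok
Agent.get(agent, fn list -> Enum.filter(list, &String.starts_with?(&1, "e")) end)
["elixir"]
Agent.get_and_update(agent, fn list -> {length(list), []} end)
2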

Conclusion

This is a simple introduction to a concept that is new to me; I hope it's useful for anyone reading it! In the next articles we'll dive deeper into other topics in Elixir, such as GenServers, Supervisors, and more…

What is a process in Elixir?

To learn what a process is in Elixir, you first have to know what a function is.

For example, IO.puts/1 is a function that writes something to the screen. The function is called puts, it lives in the IO module (I for input and O for output), and it takes one argument:

iex(1)> IO.puts("Adolfo")
Adolfo
:ok

If everything goes well, it returns the atom :ok.
A function that lets you spawn a process is Kernel.spawn/1.

Again, its name is spawn, it belongs to the Kernel module, and it takes one argument.
Can I do

Kernel.spawn(IO.puts("Adolfo"))

?
No!

Let's start by simplifying: any function that belongs to the Kernel module can be called without the module name in front.
So we can just call

spawn(IO.puts("Adolfo"))

It will still be wrong, but with fewer letters.
What Kernel.spawn/1 expects is a function of arity 0, that is, one that takes no arguments.
How do we do that?
Like this:

fn -> 1 end

The function above has no name (it is anonymous) and returns 1.

But notice that if you spawn it, nothing very interesting seems to happen:

First of all, note that

fn -> 1 end

returned a kind of "code" that identifies the function:

#Function<43.3316493/0 in :erl_eval.expr/6>

And

spawn(fn -> 1 end)

returned a PID, a Process IDentifier:

#PID<0.120.0>

I can assign this PID to a variable:

iex(1)> pid = spawn(fn -> 1 end)
#PID<0.110.0>

And then ask whether the process that was spawned is alive:

iex(2)> Process.alive?(pid)
false

It is not, because it was a very fast function that only returned 1.
I can, for example, make the process "sleep" for 10 seconds before returning the 1.

iex(3)> pid = spawn(fn -> Process.sleep(10000); 1 end)
#PID<0.113.0>

If I quickly ask whether the process, whose identifier is stored in the pid variable, is alive, the answer is yes.

iex(4)> Process.alive?(pid)
true

But if I ask again after 10 seconds, the answer will be no.

iex(5)> Process.alive?(pid)
false

If I do this

iex(6)> pid = spawn(fn -> Process.sleep(10000); IO.puts("Adolfo") end)
#PID<0.117.0>
Adolfo

it will take 10 seconds for "Adolfo" to be written to the screen.

Whereas if I do this, "Adolfo" appears on the screen immediately.

iex(7)> pid = spawn(fn -> IO.puts("Adolfo") end)
Adolfo
#PID<0.119.0>

Anyway, this is only the very basics. Read more at
https://elixirschool.com/pt/lessons/intermediate/concurrency
or in the Elixir language's Getting Started guide:
https://elixir-lang.org/getting-started/processes.html

I did all of this without even mentioning send and receive.
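
Just as a taste, a minimal sketch of send and receive exchanging a message between two processes:

pid = spawn(fn ->
  receive do
    {:hello, name} -> IO.puts("Hello, #{name}")
  end
end)
send(pid, {:hello, "Adolfo"})
Hello, Adolfo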

Wrapping up: a process in Elixir is a unit of execution that runs a function.

Purity injection in Elixir

If you came to Elixir from Ruby, like I did, you have probably been looking for a way to do dependency injection in Elixir. I know I did. I also know it's not that simple, and I was never really satisfied with the results. After a few years of looking at different options I arrived at a perhaps surprising conclusion: I don't need it (at least not most of the time).

How so?

Dependency injection is especially useful in testing. It allows you to swap a dependency that is slow or unstable for something faster and more predictable. Is DI the only means of doing that, though?

Thinking (more) functional

Elixir is a functional language, even if your Haskell friends do not necessarily agree with this. In recent months, I have begun to understand how helpful it is to step back from object-oriented thinking (inherited – pun intended – from Ruby) and attempt to approach problems from a more functional perspective.

In functional languages there’s always talk about pure vs impure functions. Ideally, all functions should be pure, but software exists in a very impure context, so it’s not feasible. But that does not mean we should give up on purity. On the contrary, it’s generally a good idea to avoid letting impurity seep into every part of our code and to attempt to write as much pure code as we can.

Just for the sake of alignment, I’m talking about pure functions, which are functions with the following properties:

  • For the same set of arguments they always return the same value
  • They don’t mutate any kind of a global state (produce side-effects)

We will mostly be concentrating on the first point. It seems simple. After all, if you want to calculate a total order price from order items, you just do some multiplication and addition – that's pure. But there are always some dark corners of the codebase where this is much, much harder. Now let's examine one of these.

Generating order number

In one of the projects I was working on we had to write an order number generator. For each incoming order, it was supposed to create a new set of letters and digits, which is:

  • Unique for given tenant
  • Human-friendly (i.e. no “O vs 0”)
  • Hard to guess (i.e. no monotonic sequences)

The first attempt at an implementation looked like this:

defmodule OrderNumber do
  def generate(tenant_id) do
    candidate = 
      :crypto.strong_rand_bytes(4) 
      |> Base.encode32(padding: false) 
      |> replace_ambiguous_characters()

    query = 
      from o in Order, 
      where: o.tenant_id == ^tenant_id and o.order_id == ^candidate

    if Repo.exists?(query), do: generate(tenant_id), else: candidate
  end
end

This looks good, until you try to test it. Basically, the only thing you can test here is that it returns some sort of 7-character-long string devoid of the letters O and I (they have been converted to the numbers 0 and 1 respectively by the replace_ambiguous_characters/1 function). But is it a good test? Are Os and Is missing because we have replaced them or because they weren't included in the initial random string? We need more control over the execution in order to increase test reliability.
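
That weak test would look something like this (a sketch, assuming the @tenant_id module attribute used in the later tests):

test "returns a 7-character string without ambiguous characters" do
  number = OrderNumber.generate(@tenant_id)
  assert String.length(number) == 7
  refute String.contains?(number, ["O", "I"])
end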

In a classic OOP-minded dependency injection we would try to pass in some dependencies, i.e. nouns. In this case, the candidates are the random number generator and the repository. Let's try this:

defmodule OrderNumber do
  def generate(tenant_id, rng \\ :crypto, repo \\ Repo) do
    candidate = 
      rng.strong_rand_bytes(4) 
      |> Base.encode32(padding: false) 
      |> replace_ambiguous_characters()

    query = 
      from o in Order, 
      where: o.tenant_id == ^tenant_id and o.order_id == ^candidate

    if repo.exists?(query), do: generate(tenant_id, rng, repo), else: candidate
  end
end

Okay, this wasn’t that bad. But now we need to craft some special modules to pass in as dependencies in tests. Elixir does not make it easy for us. You basically have to define them upfront – and the worst part: you have to name them.
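
For illustration, the module-based approach would mean defining (and naming) test-only stand-ins up front; something like this hypothetical pair:

defmodule FakeRNG do
  # Always returns the same bytes so the generated number is deterministic.
  def strong_rand_bytes(_n), do: <<1, 2, 3, 4>>
end

defmodule FakeRepo do
  # Pretends no order number is ever taken.
  def exists?(_query), do: false
end

OrderNumber.generate(@tenant_id, FakeRNG, FakeRepo)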

What if instead of injecting nouns (dependencies) we inject verbs (functions)? After all, functions should be first-class citizens of functional code. While I know there are some discussions about the dot-notation ruining it for Elixir, we can still do it relatively easily. Let's see this in action.

defmodule OrderNumber do
  def generate(tenant_id, opts \\ []) do
    generate_random = Keyword.get(opts, :generate_random, fn -> 
      :crypto.strong_rand_bytes(4) 
      |> Base.encode32(padding: false) 
    end)

    check_existence = Keyword.get(opts, :check_existence, fn tenant_id, candidate ->
      from(o in Order, 
      where: o.tenant_id == ^tenant_id and o.order_id == ^candidate)
      |> Repo.exists?()
    end)

    candidate = 
      generate_random.()
      |> replace_ambiguous_characters()

    if check_existence.(tenant_id, candidate),
      do: generate(tenant_id, opts), else: candidate
  end
end

With that, we can easily test if the ambiguous characters (O, I) are replaced by simply passing a function returning a test-worthy string:

test "replace Os and Is" do
  generator = fn -> "ABCO0I1" end
  assert OrderNumber.generate(@tenant_id, generate_random: generator) == "ABC0011"
end

It is a bit more tricky with the remaining condition – generating the candidate again if the order number is already taken. We can solve it in two ways: create an existence checker that returns true when called the first time and false the second time, or create a "random" generator that returns the values we want in sequence. Either way, we need to introduce a "controlled impurity", i.e. a function that modifies external state, but unlike a database, this state will be local to the test run. Personally I strongly favour the second option, as it also tests the cooperation between the two injected functions.

test "generate another candidate when first one is already taken" do
  {:ok, agent} = Agent.start_link(fn -> {0, ["ABC123", "XYZ555"]} end)
  generator = fn ->
    Agent.update(agent, fn {idx, list} -> {idx + 1, list} end)
    Agent.get(agent, fn {idx, list} -> Enum.at(list, idx - 1) end)
  end

  exists = fn _, number -> number == "ABC123" end

  assert OrderNumber.generate(
    @tenant_id, 
    generate_random: generator, 
    check_existence: exists
  ) == "XYZ555"
end

The question remains: is OrderNumber.generate/2 pure now? Since we inverted the control, it depends on the caller. By default it is not pure, as it calls the random number generator and the database. However, by passing in pure (or "controlled pure") functions as opts we can make it pure, which is super useful for testing.

Final touches

Just for the sake of code readability, I recommend making some more changes to the OrderNumber module. The generate function does not need to include the meaty details of calling the database or generating random strings, so we can extract these as private functions. With that, the main function looks like this:

defmodule OrderNumber do
  def generate(tenant_id, opts \\ []) do
    generate_random = Keyword.get(opts, :generate_random, &generate_random/0)
    already_taken? = Keyword.get(opts, :check_existence, &already_taken?/2)

    candidate = 
      generate_random.()
      |> replace_ambiguous_characters()

    if already_taken?.(tenant_id, candidate),
      do: generate(tenant_id, opts), else: candidate
  end
end

With that, the low-level concerns about the database structure or the chosen RNG method are hidden, and the function itself is much better at simply telling what it does, step by step.
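
For completeness, a sketch of the extracted private helpers, consistent with the earlier snippets:

defp generate_random do
  :crypto.strong_rand_bytes(4)
  |> Base.encode32(padding: false)
end

defp already_taken?(tenant_id, candidate) do
  from(o in Order,
    where: o.tenant_id == ^tenant_id and o.order_id == ^candidate
  )
  |> Repo.exists?()
end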

Summary

By adjusting our mindset to stop thinking about dependencies and start thinking about behaviours (functions), we were able to extract the impure parts of the number generator function. Then, by making them injectable, we transformed the impure function into one into which we can inject purity in tests, making it essentially pure and thus much easier to test. We also didn't need any fancy tool like Mox, Mimic or Rewire to define replacement modules for us. The code is hopefully understandable and uses only built-in Elixir idioms, without macros.

Elixir's DBConnection Pooling Deep Dive

DBConnection.ConnectionError: connection not available and request was dropped from queue after 2290ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

  1. By tracking down slow queries and making sure they are running fast enough
  2. Increasing the pool_size (albeit it increases resource consumption)
  3. Allow requests to wait longer by increasing :queue_target and :queue_interval

See DBConnection.start_link/2 for more information

It is pretty common to run into this error as an Elixir developer working with relational databases. The suggested fixes definitely help, but personally I feel bad tweaking something I don't understand. How does changing pool_size actually affect the rest of the application? I want to keep my mysteries to Saturday morning cartoons, not production. So I decided to dig in and see how pooling works.
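
For reference, these knobs usually live in the Ecto repo configuration (the values and the MyApp.Repo name below are purely illustrative):

config :my_app, MyApp.Repo,
  pool_size: 10,          # number of connections held in the pool
  queue_target: 50,       # target time (ms) a request may wait for a connection
  queue_interval: 1000    # window (ms) over which queue_target is evaluated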

DBConnection is a foundation Elixir library powering many database specific adapters. Most of the time you don’t need to know much about it, since the adapter libraries abstract away most of its inner workings.

While looking into some performance issues, my team realized we didn't have a shared mental model of how pooling works with DBConnection. It is largely undocumented, and a quick glance at the code wasn't enough to tell us how it worked. So, like any rational engineer, I went way too deep into the code and traced through everything that happens for pooled database connections.

This writeup uses Postgrex as the example adapter, but nothing about the explanation is specific to PostgreSQL. The pooling works exactly the same for any other library using DBConnection.

If you want to try the code snippets yourself, check out the Livebook version of this post.

Database setup

First comes the easy part! We’ll start a database connection using a minimal set of options.

(To run the code below, you’ll need postgres running locally.)

# Setup a database connection
db_opts = [
  database: "postgres",
  hostname: "localhost",
  password: "postgres",
  pool_size: 1,
  port: 5432,
  username: "postgres"
]

{:ok, conn} = Postgrex.start_link(db_opts)

What just happened?

In the configuration code above, a process is created and returned to the caller. A ton of work happened in the DBConnection library as a result – let’s dive into it!

First, Postgrex.start_link/1 calls DBConnection.start_link/2. [1]

def start_link(opts) do
  ensure_deps_started!(opts)
  opts = Postgrex.Utils.default_opts(opts)
  DBConnection.start_link(Postgrex.Protocol, opts)
end

The child spec returned by DBConnection is polymorphic. [2] It looks for a :pool module specified in the opts and returns the result of that module's child_spec/1 function. For the default DBConnection.ConnectionPool module, this is just a wrapper around GenServer.child_spec/1 which specifies calling start_link/1 with the parameter {Postgrex.Protocol, opts}.

So the process created and returned by Postgrex.start_link/1 will be a DBConnection.ConnectionPool GenServer. Done!

No, not done.

DBConnection.ConnectionPool.init/1 creates a protected ETS table to handle queueing of queries. The table identifier is kept in the process’s state, so we can look it up and read from it. The process also keeps track of whether it is :busy or :ready, initially starting as :busy. More on that later.

ConnectionPool state

# Get the ETS identifier from the process state
{_tag, queue, _codel, _timestamp} = :sys.get_state(conn)
{:busy, #Reference<0.803399952.2529296391.184107>,
 %{
   delay: 0,
   idle: #Reference<0.803399952.2529165319.184122>,
   idle_interval: 1000,
   interval: 1000,
   next: -576460751694,
   poll: #Reference<0.803399952.2529165319.184121>,
   slow: false,
   target: 50
 }, {-576460751695135000, 0}}

Process tree

When the :db_connection application is first started, DBConnection.ConnectionPool.Supervisor is run as a dynamic supervisor. [3] This happens automatically when db_connection is included as a mix dependency.

Supervision tree

ConnectionPool setup

The DBConnection.ConnectionPool process does a few things when it is initializing. Mostly it sets up timers for things like idle timeouts. It also calls DBConnection.ConnectionPool.Pool.start_supervised/3 with the ETS queue it created. [4]

def init({mod, opts}) do
  DBConnection.register_as_pool(mod)

  queue = :ets.new(__MODULE__.Queue, [:protected, :ordered_set])
  ts = {System.monotonic_time(), 0}
  {:ok, _} = DBConnection.ConnectionPool.Pool.start_supervised(queue, mod, opts)

  # ...snip...
end

This causes the DBConnection.Watcher GenServer process (running under the application supervision tree) to create a DBConnection.ConnectionPool.Pool process as a dynamically supervised child of DBConnection.ConnectionPool.Supervisor.

Side note: normally the new process would only be linked to its parent supervisor, but DBConnection.Watcher will monitor the calling DBConnection.ConnectionPool process and terminate the new process when the calling process stops. This effectively links the two processes together.

DBConnection.ConnectionPool.Pool is itself another supervisor. When its init/1 function is called, the :pool_size option is checked, with a default of 1. This parameter is passed all the way down from the original Postgrex.start_link/1 call and determines how many children the supervisor will have. [5]

def init({owner, tag, mod, opts}) do
  size = Keyword.get(opts, :pool_size, 1)
  children = for id <- 1..size, do: conn(owner, tag, id, mod, opts)
  sup_opts = [strategy: :one_for_one] ++ Keyword.take(opts, [:max_restarts, :max_seconds])
  Supervisor.init(children, sup_opts)
end

Each child in the pool is a DBConnection.Connection, which is an implementation of the Connection behaviour. [6] The adapter can specify callbacks to run after the connection is established if it chooses to.

Connection state

[{_, pool_sup, :supervisor, _}] =
  DynamicSupervisor.which_children(DBConnection.ConnectionPool.Supervisor)

[{_, pool_conn, :worker, _}] = Supervisor.which_children(pool_sup)
%Connection{mod_state: state} = :sys.get_state(pool_conn)
state
%{
  after_connect: nil,
  after_connect_timeout: 15000,
  backoff: %DBConnection.Backoff{max: 30000, min: 1000, state: {1000, 10000}, type: :rand_exp},
  client: {#Reference<0.803399952.2529296391.185395>, :pool},
  connection_listeners: [],
  mod: Postgrex.Protocol,
  opts: [
    pool_index: 1,
    types: Postgrex.DefaultTypes,
    database: "postgres",
    hostname: "localhost",
    password: "postgres",
    pool_size: 1,
    port: 5432,
    username: "postgres"
  ],
  pool: #PID<0.255.0>,
  state: %Postgrex.Protocol{
    buffer: "",
    connection_id: 88094,
    connection_key: 401165952,
    disconnect_on_error_codes: [],
    null: nil,
    parameters: #Reference<0.803399952.2529165319.184202>,
    peer: {{127, 0, 0, 1}, 5432},
    ping_timeout: 15000,
    postgres: :idle,
    queries: #Reference<0.803399952.2529296391.184199>,
    sock: {:gen_tcp, #Port<0.10>},
    timeout: 15000,
    transactions: :naive,
    types: {Postgrex.DefaultTypes, #Reference<0.803399952.2529296391.184175>}
  },
  tag: #Reference<0.803399952.2529296391.184107>,
  timer: nil
}

Each Connection process keeps track of several things in its state. The pool key contains the PID of the DBConnection.ConnectionPool process – the one that was returned from Postgrex.start_link/1. The tag key contains the table identifier for the ETS table used as a queue.

Most of the state is generic, but the nested state key is specific to the calling library – Postgrex in this case.

Adding the connection to the pool

After the connection is established, DBConnection.Holder.update/4 is called. [7] This creates a new public ETS table and gives ownership of it to the DBConnection.ConnectionPool process.

def update(pool, ref, mod, state) do
  holder = new(pool, ref, mod, state)

  try do
    :ets.give_away(holder, pool, {:checkin, ref, System.monotonic_time()})
    {:ok, holder}
  rescue
    ArgumentError -> :error
  end
end

When the ownership message is received, the ConnectionPool process will check its queue and try to pull off the first message. [8] Since the queue starts empty, this is a noop and the pool will just transition from :busy to :ready. An entry of {timestamp, holder} is inserted into the queue, where holder is a reference to the ETS table created by DBConnection.Holder.

When the process is in its :ready state, the ETS queue will always contain either this holder tuple or nothing. When the process is :busy, the queue will instead have queries waiting for an available connection.

# A single entry exists in the queue that contains a reference to the holder ETS
:ets.match(queue, :'$1')
[
  [{{-576460431686553007, #Reference<0.803399952.2529296391.185475>}}]
]

Pool size

So, what happens if we change the pool size? Extra DBConnection.Connection processes are spawned under the DBConnection.ConnectionPool.Pool supervisor. They all share the same ETS queue and reference the same DBConnection.ConnectionPool process.

{:ok, conn2} = db_opts |> Keyword.put(:pool_size, 2) |> Postgrex.start_link()

# ConnectionPool.Supervisor children - one for each start_link call
Supervisor.count_children(DBConnection.ConnectionPool.Supervisor)
%{active: 2, specs: 2, supervisors: 2, workers: 0}

[_, {_, pool_sup, :supervisor, _} | _] =
  DynamicSupervisor.which_children(DBConnection.ConnectionPool.Supervisor)

# ConnectionPool.Pool children - one for each pooled connection
Supervisor.count_children(pool_sup)
%{active: 2, specs: 2, supervisors: 0, workers: 2}

Supervision tree with two pools

The holder ETS pattern is repeated for each underlying connection in the pool. The single DBConnection.ConnectionPool process returned by start_link contains a single queue, and for each pooled connection an entry is inserted into that queue with a holder ETS table that references a single underlying connection.

# View the queue with two pooled connections
{_tag, queue, _codel, _timestamp} = :sys.get_state(conn2)
:ets.match(queue, :'$1')
[
  [{{-576453306038052131, #Reference<0.1613371477.1188167681.181386>}}],
  [{{-576453306037963131, #Reference<0.1613371477.1188167681.181387>}}]
]

Queueing and querying

Now that we understand the processes, the remaining question is how queries get queued and sent to the underlying connections. Postgrex.query/4 takes our PID as its first argument. That gets passed to DBConnection.prepare_execute/4 [9] – I'll spare you the full stack trace, but eventually DBConnection.Holder.checkout_call/5 is called.

This function sends a message to the DBConnection.ConnectionPool process asking to check out a connection. [10] One of the parameters sent is whether queueing is allowed, which is controlled by the :queue parameter and defaults to true. If the DBConnection.ConnectionPool process is in a :busy state, then the request is either inserted into the ETS queue or rejected, depending on the :queue parameter. Assuming the process is :ready, the first holder tuple is deleted from the queue and ownership of that holder ETS reference is given away to the calling process. If no more holder entries are available, the process is marked as :busy.

DBConnection.Holder waits for the message from the ETS ownership transfer (within the calling process). It marks the connection contained within the holder ETS entry as locked and returns the connection.

receive do
  {:"ETS-TRANSFER", holder, pool, {^lock, ref, checkin_time}} ->
    Process.demonitor(lock, [:flush])
    {deadline, ops} = start_deadline(timeout, pool, ref, holder, start)
    :ets.update_element(holder, :conn, [{conn(:lock) + 1, lock} | ops])

    pool_ref =
      pool_ref(pool: pool, reference: ref, deadline: deadline, holder: holder, lock: lock)

    checkout_result(holder, pool_ref, checkin_time)

  # ...snip...
end

The connection is now checked out and used by DBConnection to make the query. Once the query is complete, Holder.checkin/1 is called with the connection. The holder ETS table is updated and ownership is transferred back to the DBConnection.ConnectionPool process. If the process was busy (which indicates that all connections were checked out), then the queue is checked for any waiting queries and the steps repeat.
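
Tying it back to the user-facing API, all of this machinery is what runs underneath an ordinary query call against the pool process returned by start_link; a trivial example:

{:ok, result} = Postgrex.query(conn, "SELECT 1", [])
result.rows
[[1]]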

Summary

Let’s summarize what we learned!

When start_link is called:

  • A DBConnection.ConnectionPool process is started. This is pretty much the only thing the user interacts with directly.
  • An ETS table is created by the ConnectionPool and used as a queue for incoming requests, as well as for tracking connections in the pool.
  • A DBConnection.ConnectionPool.Pool supervisor is started. This is dynamically added to the DBConnection’s supervision tree and DBConnection.Watcher links it to the ConnectionPool process.
  • One or more DBConnection.Connection processes are started as children of the Pool supervisor. Each one represents a separate network connection to the database.
  • Each Connection is referenced by an ETS table created by DBConnection.Holder. Ownership of these holder tables is passed to the ConnectionPool process.

 

And finally when a query is sent:

  • The first available connection is found by looking at the ETS queue for a holder reference. The calling process gets ownership of the holder ETS table and the reference to it is removed from the queue.
  • If no holders are found, the query is added to the ETS queue.
  • When a query finishes the holder reference is passed back to the ConnectionPool process. The next queued query is pulled and run, if there are any.

Footnotes:

  1. postgrex start_link
  2. db_connection :pool module
  3. supervision tree
  4. start pool process
  5. pool process definition
  6. connection start_link
  7. pool_update/2
  8. ConnectionPool checkin
  9. Postgrex query
  10. Holder checkout_call

ElixirConf 2022 – That's a wrap!

If you’ve been following my sporadic twitter posts as I continue building my skills as a developer, you’ll know that I was incredibly fortunate to be accepted as a speaker at this year’s ElixirConf in Aurora, Colorado. For the last week, I (along with fellow Alembians Josh Price and Zach Daniel) have had the privilege of hanging out in the United States with some of the most talented people I have ever met, all of whom are present and vocal members of the Elixir community.

If you know me personally, you’ll also know that this was a super special trip for me, as it was not only my first time speaking at a conference, but also, my first trip outside my home country of Australia! Despite lots of travel-related hijinks that left me a little worse for wear, I’d do it all again if it meant meeting so many amazing, talented, wonderful programmers.

It was an incredibly jam-packed three days for myself and my colleagues, in between running our Alembic sponsor booth, sharing everything going on with the Ash framework, presenting my talk to the community and soaking up the knowledge of many, many incredible speakers. The community is continually innovating in the Elixir ecosystem, as displayed by some of the presentations shared at the conference:

  • Brooklin Myers from Dockyard has been hard at work building the curriculum for Dockyard Academy, a bootcamp designed to train developers to fill the increasing number of Elixir developer roles. The entire twelve-week curriculum is available on GitHub for early review – and the whole thing is created by utilising LiveBook to create an interactive learning experience that makes devs get their hands dirty. As discussed in his talk, Brooklin has utilised smart cells to ensure a friendlier and more accessible user experience for students taking the online course, providing widgets that allow for easier navigation through course materials.
  • Chris Grainger is making strides in the machine learning side of the Elixir community with his company, Amplified. By utilising machine learning and training a model to find specific patterns in text, Chris has created a platform that allows for incredibly complex queries to be run on Amplified’s database of patents. By giving users incredibly granular control over queries to the database, Amplified allows for finding specific words and phrases in the text contained within patents to minimise the amount of manual reading needed when a patent applicant is researching potential conflicts.
  • Tyler Young took the floor to show the community how the team at Felt handled multiple people making edits simultaneously in their map editor. As Elixir’s ecosystem evolves, it will be fascinating to see more and more use cases for multiplayer editing, and how interfaces for such complex interactions will be handled.

These are just some of the highlights from the talks I was able to watch in-between discussions with fellow developers and hanging out at the Alembic booth giving out stickers. It was fantastic to see many speakers from different specialisations and at different levels of seniority and experience share their knowledge and learning with the rest of the community, and notably, it was fascinating to see the sheer variety of different use cases in which Elixir is being utilised by individuals and organisations alike.

Of course, no ElixirConf would be complete without its keynotes, and the core teams behind the ecosystem’s technologies and frameworks absolutely delivered.

Opening Keynote – José Valim

Apart from the updates announced for Elixir 1.14 released as part of the changelog, José spoke at length about the next steps for Elixir:

  • The core team is aiming to continue releasing updates every 6 to 9 months, focusing on smaller improvements and optimisations. He mentioned that this cycle allows the team enough breathing room to experiment with new ideas in preparation for larger updates to the language.
  • Three key areas of focus were pointed out for the future of Elixir – set-theoretic types (which he covered at length in his ElixirConf EU 2022 Keynote), developer and learning experience, and machine learning.
  • José is adamantly championing Livebook as a tool for learning the language and teaching aspiring alchemists. In particular, he spoke about the breadth of possibilities for visualising aspects of Livebook using Kino, such as charts, graphs, and mermaid diagrams. As mentioned previously, Dockyard Academy is taking this approach with their bootcamp curriculum, and is using this suite of visualisation tools alongside smart cells to lower the barrier to entry for new developers.
  • Machine learning is rapidly expanding within the Elixir ecosystem, with tools such as Nx, Axon, and Explorer being used both by individuals and companies such as Amplified, as mentioned above.

It’s fantastic to see the focus on developer and learning experience in particular, as more and more Elixir positions are opening and developers are joining the community.

Closing Keynote – Chris McCord

Despite some disruptive technical difficulties around laptops and screen sharing, Chris delivered some news that delighted users of Phoenix and LiveView across the board:

  • We’re getting a nifty new feature, Phoenix.VerifiedRoutes, as a replacement for the (currently very verbose) route helper methods. They’re a sigil-based string that, at compile time, is dispatched against the router and throws a warning if no match is found. This also works with nesting, turning this:

     resources "/posts", PostController do
            resources "/comments", CommentController
     end
     > Routes.post_comment_path(@conn, :show, @post, @comment)
    

    Into this:

    ~p"/posts/#{@post.id}/comments/#{@comment.id}"
    

    To paraphrase Chris, we now get to sprinkle strings everywhere, but in a way that “doesn’t suck”.

  • LiveView is getting declarative assigns and slots that help provide some quality of life improvements at compile time. Notably, we can now add docs in our declaration:

    attr :row_id, :any, default: nil, doc: "the function compute each tr id"
    attr :rest, :global, doc: "arbitrary HTML attrs to apply to the tbody"
    

    This means that mix docs can now pull these into the documentation in a human-readable way.

  • Previously, we had to use the assigns_to_attributes function to allow users to pass in arbitrary assigns, like so:

    def icon(assigns) do
        assigns =
            assigns
            |> assign_new(:outlined, fn -> false end)
            |> assign_new(:class, fn -> "w-4 h-4 inline-block" end)
    
    ~H"""
    
    """
    

    But now, with the use of a :global attribute, we can write the same component like so:

    attr :name, :atom, required: true
    attr :outlined, :boolean, default: false
    attr :rest, :global, default: %{class: "w-4 h-4 inline-block"}
    
    def icon(assigns) do
        ~H"""
        
        """
    end
    
  • A HEEx formatter is here in the form of a mix formatter plugin, thanks to Felipe Renan. This is a huge quality of life improvement and people were cheering at this announcement!

  • The authentication system in Phoenix has had live generation added with phx.gen.auth --live, thanks to Berenice Medel from the Phoenix core team. By creating auth flows with LiveView, we can create richer experiences for users when logging in or signing up, with niceties such as real-time form feedback creating a better user experience. Additionally, all of the code newly generated with this flag is fully tested.

  • Phoenix 1.7 will include Tailwind by default when creating a new app – no flag needed. Chris was careful to point out that this can be removed with minimal effort if devs don’t want to use it in their app, but also pointed out how Tailwind pairs incredibly well with Phoenix’s component system. Additionally, the new landing page generated with a new app is designed by the team at Tailwind, and includes resources to help users who are new to using Tailwind in their apps.

  • LiveView 0.18 is very accessibility-focused, and is releasing with a new function component, <.focus_wrap>. This is useful for keeping focus within a modal while the user has one open, rather than tabbing through fields and ending up somewhere outside of the modal while it is still open. This is included with the new out-of-the-box components being brought to Phoenix, so that a certain level of accessibility is baked into the code generated with phx.gen.auth.

Chris wrapped up by sharing the Roadmap of future features, including:

A great time to be using Elixir

Ultimately, this year's ElixirConf showed just how much developers and companies are benefitting from the "Elixir advantage" – teams are building things faster, cheaper and with fewer people than before. Elixir's use cases are growing even broader, especially with renewed focus on the machine learning tools and libraries within the ecosystem and the business value that they can bring. With core project teams working hard to make Elixir and its frameworks even more robust and delightful to use, the future of this language and its community is looking brighter than ever.

Want to enjoy the Elixir advantage on your next project? Get in touch with us at Alembic.

If you want to understand how Elixir Apps work, this is the way

Before you read this article, I highly recommend you read the article about GenServer:
https://dev.to/postelxpro/read-this-article-if-you-want-to-learn-genserver-1l24

Greeting

Hello #devElixir!!! Welcome to #FullStackElxpro

Here, we discuss strategies and tips for your Elixir learning journey from zero to an expert in 3 months.

I am Gustavo, and today's theme is **Supervisor and DynamicSupervisor in Elixir**.

PS: You can also follow this article as a video.

Want to learn more about Elixir on a Telegram channel?

https://elxpro.com/subscribe-elxcrew

What is the difference between Supervisor and DynamicSupervisor?

Before getting to the difference between Supervisor and DynamicSupervisor, let's read about and understand both.

Supervisor

https://elixir-lang.org/getting-started/mix-otp/supervisor-and-application.html

A supervisor is a process that supervises other processes, which we refer to as child processes. Supervisors are used to build a hierarchical process structure called a supervision tree. Supervision trees provide fault tolerance and encapsulate how our applications start and shut down.

In my experience as a developer, Supervisor is a way to manage the lifecycle of my Elixir web applications and a way to start and manage the essential dependencies you will have in your app, e.g. Ecto, Phoenix, Oban, Broadway.

If you understand how Supervisor works, it becomes much easier to set up dependencies between processes and to start them quickly, and building Elixir applications and understanding the core frameworks becomes more straightforward.

DynamicSupervisor

A DynamicSupervisor starts with no children. Instead, children are started on demand via start_child/2. When a dynamic supervisor terminates, all children are shut down at the same time, with no guarantee of ordering.

https://hexdocs.pm/elixir/1.13/DynamicSupervisor.html
https://elixir-lang.org/getting-started/mix-otp/dynamic-supervisor.html

DynamicSupervisor starts processes only when necessary, and you can see this easily when you have a browser opened or a query executing through Ecto. I like making these associations with our typical day as Elixir developers because most of the time you will not use DynamicSupervisor/Supervisor directly. But they are there, you are using them, and you will probably need to debug or understand a piece of code in core frameworks like Ecto/Phoenix or others. That is why it is worth understanding the difference between them, so you can use them yourself one day.

Which is worth more: using Supervisor or DynamicSupervisor?

I think you will use both, but in different contexts.

Do you remember any stories where this was important to you?

Yes, I do. I remember when I barely knew how to use Phoenix and Ecto, and I had some problems with those frameworks and their lifecycles. I was also pushing myself to learn deeply how Phoenix, Ecto, and Elixir work.

I had no idea why application.ex was in my Phoenix applications, no idea how processes and supervision trees work, no idea how a Phoenix app works. And I lost a lot of productivity in my day-to-day as an Elixir developer because I did not know why or how they worked and why they were there. When I decided to study how they work and research both, my life as an Elixir developer changed. My skill and productivity with Elixir were raised 10x simply because I understood how they work.

Where do you see people get it wrong the most?

The main problem is feeling comfortable using only libraries like Phoenix and Ecto for more than 10 months. If you don't understand how your Elixir application works, you will face problems scaling apps that use dependencies like Oban, Broadway, frameworks for caching or MongoDB, or even a simple GenServer.

The best way in my experience is:

  • Am I comfortable creating standard Phoenix apps?
  • Then I am going to study how processes work.
  • Do I know how to build a GenServer, Agents and Tasks? If not, I am going to look into them.
  • Do I understand how Supervisor and DynamicSupervisor work? If not, I am going to learn it.

Where to start?

Level 1

Start a standard Elixir app

❯ mix new practice                                       

and create a GenServer

defmodule StocksV1 do
  use GenServer

  def start_link(name: name, valuation: valuation) do
    GenServer.start_link(__MODULE__, valuation, name: name)
  end

  def init(state) do
    {:ok, state}
  end

  def handle_cast({:add, value}, state) do
    state = state + value
    {:noreply, state}
  end

  def add(stock, value) do
    GenServer.cast(stock, {:add, value})
  end
end

Let's play around with our GenServer.

iex(4)> StocksV1.start_link name: :etsy, valuation: 10

iex(5)> :sys.get_state :etsy

iex(6)> GenServer.cast :etsy, {:add, 30} 

iex(7)> :sys.get_state :etsy

What if we want to start our stock as part of our app?

In this case we can use a Supervisor, but how do we start it?

It is simple, and since we are learning the whole process, we will wire it up through an Application. The Application is essential for our libraries and Elixir applications.

Read more: https://hexdocs.pm/elixir/1.13/Application.html

Applications are the idiomatic way to package software in Erlang/OTP. To get the idea, they are similar to the “library” concept common in other programming languages, but with some additional characteristics.

It is straightforward to use: you only need to add this to your mix.exs

  def application do
    [
      extra_applications: [:logger],
      mod: {Practice.Application, []}
    ]
  end

create application.ex

defmodule Practice.Application do
  use Application

  def start(_, _) do
    children = []
    Supervisor.start_link(children, strategy: :one_for_one, name: __MODULE__)
  end
end

Every time you create an Application, you must start a linked process, and in this case our link is our Supervisor. After starting the Supervisor, you only need to add your processes to its children list:

defmodule Practice.Application do
  use Application

  def start(_, _) do
    children = [
      {StocksV1, name: :etsy, valuation: 10},
      # {StocksV1, name: :appl, valuation: 10}
    ]
    Supervisor.start_link(children, strategy: :one_for_one, name: __MODULE__)
  end
end

But you will notice that one of them is commented out: each child needs a unique ID, and with a plain Supervisor we can't start children whenever we want, only when the application starts. That moves us to the next step: using a DynamicSupervisor.

defmodule Practice.Application do
  use Application

  def start(_, _) do
    children = [
      {Registry, keys: :unique, name: Stocks},
      {DynamicSupervisor, strategy: :one_for_one, name: Stocks.DynamicSupervisor}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: __MODULE__)
  end
end

defmodule Stocks do
  use GenServer

  def start_link(name: name, valuation: valuation) do
    name = stock_name(name)
    GenServer.start_link(__MODULE__, valuation, name: name)
  end

  def stock_name(name) do
    {:via, Registry, {__MODULE__, name}}
  end

  def init(state) do
    {:ok, state}
  end

  def handle_cast({:add, value}, state) do
    state = state + value
    {:noreply, state}
  end

  def create_stock(name, valuation \\ 20) do
    DynamicSupervisor.start_child(
      Stocks.DynamicSupervisor,
      {Stocks, [name: name, valuation: valuation]}
    )
  end

  def add(stock, value) do
    stock = stock_name(stock)
    GenServer.cast(stock, {:add, value})
  end
end

When you call create_stock, unlike with a plain GenServer, you create a child under your DynamicSupervisor, and its process id is linked to the parent process.

iex(1)> Stocks.create_stock "apple", 100
{:ok, #PID<0.175.0>}

iex(2)> Stocks.create_stock "alphabet", 300
{:ok, #PID<0.177.0>}

iex(3)> :sys.get_state pid("0.175.0")
100

Because we are using Registry, the way we address processes is different; that is why we have the stock_name function, which builds the unique ID for our process so that we can then update its state.


iex(4)> Stocks.add "apple", 300
:ok
iex(5)> :sys.get_state pid("0.175.0")

Wrap up

If you followed this article, you will notice how simple it is to use both of these services once you understand how processes work.
