Compute the n-th number in the Fibonacci series.
In Math: f(n) = f(n-1) + f(n-2)
Applying the formula above directly, we get:
defmodule Experiments do
  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n), do: fib(n-1) + fib(n-2)
end
While it’s very simple and mathematically correct, this is impractical: the calls to fib(n-1) and fib(n-2) recompute the same subproblems over and over, so the running time grows exponentially with n.
So how do we do this efficiently? There is a better solution, with time complexity O(n).
defmodule Experiments do
  defp comp_fib(0), do: [0 | 0]
  defp comp_fib(1), do: [1 | 0]
  defp comp_fib(n) do
    [h | t] = comp_fib(n-1)
    [h+t | h]…
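The same idea behind the O(n) Elixir version — carrying the pair (f(n), f(n-1)) forward instead of recomputing it — can be sketched in Python as follows. This is an illustrative sketch, not the post's original code:

```python
def fib(n: int) -> int:
    """Compute the n-th Fibonacci number in O(n) time.

    Instead of re-deriving fib(n-1) and fib(n-2) recursively, carry
    the pair (f(k), f(k-1)) forward one step per iteration.
    """
    if n == 0:
        return 0
    curr, prev = 1, 0  # (f(1), f(0))
    for _ in range(n - 1):
        curr, prev = curr + prev, curr  # step to (f(k+1), f(k))
    return curr
```

Each step does constant work, so n steps give linear time, compared to the exponential blow-up of the naive recursion.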
In part 2 of the Fast.ai Deep Learning Course, I learned that it’s important not only to be able to use a Deep Learning library such as TensorFlow or PyTorch, but to really understand the ideas and what’s actually happening behind them. And there’s no better way to understand it than to try and implement it ourselves.
In Machine Learning practice, convolution is something we’re all very familiar with. So I thought, why not give it a try? Here’s the result of my experiments in implementing Convolution in Julia.
When I took Andrew Ng’s course in Machine Learning, I mostly used MATLAB…
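To show the core of what such an implementation involves, here is a minimal, naive 2D "valid" convolution sketched in Python (the post itself uses Julia; this is an assumed illustration, not the author's code):

```python
def conv2d_valid(image, kernel):
    """Naive 2D "valid" convolution: no padding, stride 1.

    image and kernel are lists of lists of numbers. The kernel is
    flipped in both dimensions, per the mathematical definition of
    convolution (cross-correlation, common in DL libraries, skips
    the flip).
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # "valid" output size
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    # flipped kernel index implements the convolution flip
                    acc += image[i + di][j + dj] * kernel[kh - 1 - di][kw - 1 - dj]
            out[i][j] = acc
    return out
```

A real implementation would vectorize the inner loops, but the four nested loops make the sliding-window structure explicit.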
At the beginning of 2020, I decided to learn Rust. After almost two months of intensive learning, I finally finished “the book”, completed the rustlings course, and I think I grasp at least the basic principles of Rust.
For me, learning Rust is not just learning another language. It feels like learning a new way of programming. Rust promises to empower everyone to build efficient and reliable software.
What did I learn about efficiency from Rust?
What is zero-cost abstraction?
What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better…
Blue/Green Deployment is an approach to achieving zero-downtime deployment. This is accomplished by creating two separate environments: one hosts the current application (called “Blue”) and the other hosts the application you are about to deploy (called “Green”).
This approach can greatly reduce risk, especially in a production environment. It allows you to serve the new application only when it’s ready, giving your users a very smooth transition. It also allows you to roll back to the previous application if something goes wrong with the new one.
On ECS, these two separate environments are two Load Balancing Target…
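The mechanics of the switch can be sketched as a toy model: two environments, a router pointer, and a swap that only happens after a health check. The names (Environment, Router) are illustrative, not an AWS or ECS API:

```python
class Environment:
    """One of the two deployment targets (Blue or Green)."""
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.healthy = True

class Router:
    """Points live traffic at exactly one environment at a time."""
    def __init__(self, blue, green):
        self.blue, self.green = blue, green
        self.active = blue  # traffic currently goes to Blue

    def idle(self):
        return self.green if self.active is self.blue else self.blue

    def deploy(self, version):
        # Release to the idle environment; live traffic is untouched.
        self.idle().version = version

    def swap(self):
        # Serve the new version only when it is ready.
        if self.idle().healthy:
            self.active = self.idle()

    def rollback(self):
        # The previous environment is still intact, so switch back.
        self.active = self.idle()
```

In a real ECS setup the "router" role is played by the load balancer shifting traffic between target groups, but the state machine is the same.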
Containers allow developers to build their code as one package of software that can be deployed virtually anywhere. Still, scaling a containerised app can take quite an amount of configuration: setting up networking, a load balancer, auto-scaling policies, and more.
Serverless products such as AWS Lambda, Google Cloud Functions, or Azure Functions allow you to scale your app easily, but they limit your choice of language, framework, and dependencies.
AWS Fargate is a service that allows you to run a container app without having to manage your cluster or server. …
PHP is still the most popular server-side programming language, reportedly used by almost 80% of websites. Among PHP frameworks, Laravel is one of the most popular and widely used. Do you use Laravel in your production environment?
Forge and Envoyer can make it very easy to deploy to one or more servers, but it can get difficult when you need to scale automatically: scaling out when demand rises, and scaling in when it drops.
Note: While this tutorial is designed with deploying a Laravel app in mind, you can use it for other containerised…
Tech Lead, Cloud Architect, Machine Learning Practitioner