Insider Tip: Supercharge Your Phoenix App with `:ets` Caching
Shhh... not everyone knows this one.
If you're running a high-traffic Phoenix app and notice bottlenecks around frequently accessed data (like config settings, permissions, or feature flags), there's a dead-simple way to massively cut down latency—without reaching for Redis or another external cache layer.
Here’s the move: use Erlang’s built-in `:ets` (Erlang Term Storage) for in-memory reads at lightning speed. Think microsecond access times.
⚡ Why this works
`:ets` is managed by the BEAM, lives in memory, and supports concurrent read access. It's ideal for lookups that rarely change but are read constantly.
🔧 Example: Caching user roles
Let’s say you’re hitting the database every time you check a user’s role. Here’s a sneakier way:
```elixir
defmodule MyApp.RoleCache do
  @table :role_cache

  def init do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  def put(user_id, role) do
    :ets.insert(@table, {user_id, role})
  end

  def get(user_id) do
    case :ets.lookup(@table, user_id) do
      [{^user_id, role}] -> {:ok, role}
      [] -> :miss
    end
  end
end
```
Drop this into an app startup hook (e.g., `MyApp.Application.start/2`) to initialize the table.
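For reference, wiring the init call into the application callback might look like this. Treat it as a sketch: the child list is illustrative and your module names will differ. One real caveat to keep in mind: an `:ets` table is destroyed when the process that created it exits, so creating it in `start/2` ties its lifetime to the application itself.

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Create the ETS table before the supervision tree boots,
    # so it exists by the time any process tries to read it.
    # Note: the table is owned by the process that calls :ets.new/2.
    MyApp.RoleCache.init()

    children = [
      MyApp.Repo,
      MyAppWeb.Endpoint
      # ...your other children
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```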
🚀 Pro tip: Layer it with fallback logic
```elixir
def get_or_fetch(user_id) do
  case get(user_id) do
    {:ok, role} ->
      role

    :miss ->
      # Cache miss: fall back to the database, then warm the cache.
      role = MyApp.Repo.get_role_from_db(user_id)
      put(user_id, role)
      role
  end
end
```
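To see it on the hot path, here's a usage sketch as a Phoenix plug. `MyAppWeb.RequireAdmin` is a hypothetical module, and it assumes an earlier plug has put `current_user` in `conn.assigns`; only the first request per user ever touches the database.

```elixir
defmodule MyAppWeb.RequireAdmin do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Cached read: served from ETS on every request after the first miss.
    case MyApp.RoleCache.get_or_fetch(conn.assigns.current_user.id) do
      :admin -> conn
      _other -> conn |> send_resp(403, "Forbidden") |> halt()
    end
  end
end
```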
🧠 Remember
- Keep the cached data small, and keep fast-changing data out of it.
- `:ets` tables don't persist across node restarts, so use them strategically.
- For true distributed caching, you'll want to look into `:global`, `:mnesia`, or external layers. But for local, hot-path reads? This is chef's kiss.
You didn’t hear it from me. But this one tweak? Could be your secret edge. 🕶️