NEXTSCRIBE

Posts tagged with "web-development"

Insider Tip: Supercharge Your Phoenix App with `:ets` Caching

Shhh... not everyone knows this one.

If you're running a high-traffic Phoenix app and notice bottlenecks around frequently accessed data (like config settings, permissions, or feature flags), there's a dead-simple way to massively cut down latency—without reaching for Redis or another external cache layer.

Here’s the move: use Erlang’s built-in :ets (Erlang Term Storage) for in-memory reads at lightning speed. Think microsecond access times.

⚡ Why this works

:ets is managed by the BEAM, lives in memory, and supports concurrent read access. It's ideal for lookups that rarely change but are read constantly.

🔧 Example: Caching user roles

Let’s say you’re hitting the database every time you check a user’s role. Here’s a sneakier way:

defmodule MyApp.RoleCache do
  @table :role_cache

  def init do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  def put(user_id, role) do
    :ets.insert(@table, {user_id, role})
  end

  def get(user_id) do
    case :ets.lookup(@table, user_id) do
      [{^user_id, role}] -> {:ok, role}
      [] -> :miss
    end
  end
end

Drop this into an app startup hook (e.g., MyApp.Application.start/2) to initialize the table.
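As a rough sketch (the module name and child list below are placeholders; your application module will have its own children), the call could look like this:

# lib/my_app/application.ex (sketch: the children listed are placeholders)
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Create the ETS table before the supervision tree starts,
    # so it's ready by the time the first request comes in.
    MyApp.RoleCache.init()

    children = [
      MyApp.Repo,
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end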

🚀 Pro tip: Layer it with fallback logic

def get_or_fetch(user_id) do
  case get(user_id) do
    {:ok, role} ->
      role

    :miss ->
      role = MyApp.Repo.get_role_from_db(user_id)
      put(user_id, role)
      role
  end
end

🧠 Remember

  • Keep the cached data small, and keep fast-changing values out of it; when a value does change, invalidate the entry (see the sketch after this list).
  • :ets tables don’t persist across node restarts—use it strategically.
  • For true distributed caching, you'll want to look into :global, :mnesia, or external layers. But for local, hot-path reads? This is chef’s kiss.
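
One minimal way to handle that invalidation, assuming the RoleCache module above (the invalidate/1 function is an addition for illustration, not something shown earlier):

# Add to MyApp.RoleCache: drop a stale entry so the next read
# misses the cache and falls back to the database.
def invalidate(user_id) do
  :ets.delete(@table, user_id)
end

# Call it wherever roles actually change, e.g.:
# MyApp.RoleCache.invalidate(user.id)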

You didn’t hear it from me. But this one tweak? Could be your secret edge. 🕶️

Keep Your GoFiber Handlers Clean with Middleware Validation

If you’ve been building APIs with GoFiber, you’ve probably had that moment where your route handler starts to get a little... messy. Especially when you’re doing request validation right inside the handler.

Let’s talk about a simple tip to clean that up using middleware — it’s easy, reusable, and your future self will thank you.

😬 The Messy Way (We've All Done It)

Here’s what a typical handler might look like at first:

app.Post("/users", func(c *fiber.Ctx) error { type UserRequest struct { Name string `json:"name"` Email string `json:"email"` } var body UserRequest if err := c.BodyParser(&body); err != nil { return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{ "error": "Invalid request body", }) } if body.Name == "" || body.Email == "" { return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{ "error": "Name and Email are required", }) } // Do the thing return c.JSON(fiber.Map{"message": "User created"}) })

This works fine at first, but as your API grows, stuffing all the parsing and validation into your handlers gets old real quick.

✨ The Cleaner Way: Middleware

Let’s move the validation logic out of the handler and into a middleware. That way, your handler can focus on doing what it’s actually meant to do.

🧱 Step 1: Define Your Request Struct

type UserRequest struct {
    Name  string `json:"name"`
    Email string `json:"email"`
}

🧼 Step 2: Write a Middleware to Validate It

func ValidateUserRequest(c *fiber.Ctx) error {
    var body UserRequest
    if err := c.BodyParser(&body); err != nil {
        return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{
            "error": "Invalid JSON body",
        })
    }

    if body.Name == "" || body.Email == "" {
        return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{
            "error": "Name and Email are required",
        })
    }

    // Pass the validated body to the next handler
    c.Locals("userBody", body)
    return c.Next()
}

🧑‍🍳 Step 3: Use It in Your Route

app.Post("/users", ValidateUserRequest, func(c *fiber.Ctx) error { body := c.Locals("userBody").(UserRequest) // Clean and easy return c.JSON(fiber.Map{ "message": "User created", "user": body, }) })

Nice and tidy. You’re only dealing with the stuff you care about inside the handler.

🏆 Why This Rocks

  • Separation of concerns – Validation and logic are in their own lanes.
  • Reusability – You can reuse the same middleware for other routes (see the sketch after this list).
  • Testability – Easier to test your logic without worrying about request parsing every time.
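
As an example of that reusability, the same ValidateUserRequest can guard any route that accepts the same payload. A sketch, where the /admin group and the response body are made up for illustration:

// Reusing the validation middleware on another route.
admin := app.Group("/admin")

admin.Post("/users", ValidateUserRequest, func(c *fiber.Ctx) error {
    // Same validated payload, different handler logic.
    body := c.Locals("userBody").(UserRequest)

    return c.JSON(fiber.Map{
        "message": "User created by admin",
        "user":    body,
    })
})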

💪 Bonus: Use a Validation Library

Want to step it up a notch? Add go-playground/validator for fancy rules like email format, string length, etc.

go get github.com/go-playground/validator/v10

Then tweak your middleware:

import "github.com/go-playground/validator/v10" var validate = validator.New() type UserRequest struct { Name string `json:"name" validate:"required"` Email string `json:"email" validate:"required,email"` } func ValidateUserRequest(c *fiber.Ctx) error { var body UserRequest if err := c.BodyParser(&body); err != nil { return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": "Invalid body"}) } if err := validate.Struct(body); err != nil { return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": err.Error()}) } c.Locals("userBody", body) return c.Next() }

Now your validation is way more powerful, but still just as clean.

👋 Final Thoughts

Keeping your GoFiber handlers clean doesn’t have to be complicated. With a little middleware magic, you can offload the grunt work like validation and keep your handlers laser-focused.

Try it out on your next API project. Your code (and teammates) will be happier for it.

Next.js for Fullstack Development: Pros & Cons

Next.js has become one of the go-to frameworks for fullstack web development. But is it the right choice for your project? Let's break it down in simple terms.

Pros ✅

1. Server-Side Rendering (SSR) & Static Generation (SSG)

Next.js lets you render pages on the server (SSR) or ahead of time (SSG), making your app faster and more SEO-friendly.
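
As a quick illustration, with the App Router you pick the rendering mode per route via segment config exports. A sketch (the values and page content are placeholders):

// app/page.tsx (sketch: values and markup are placeholders)
export const dynamic = "force-dynamic"; // render on every request (SSR)
// export const revalidate = 3600;      // or: pre-render and refresh hourly (ISR)

export default function HomePage() {
  return <h1>Rendered at {new Date().toISOString()}</h1>;
}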

2. Fullstack Capabilities

With API routes, you can build both frontend and backend in the same project—no need for a separate backend service.
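
For instance, a Route Handler under app/api/ gives you a server-side endpoint living right next to your pages. A minimal sketch (the path and payload are placeholders):

// app/api/hello/route.ts (sketch: path and payload are placeholders)
import { NextResponse } from "next/server";

export async function GET() {
  // Runs on the server, in the same project as the frontend.
  return NextResponse.json({ message: "Hello from a Next.js API route" });
}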

3. Automatic Code Splitting

Next.js automatically splits your code, so users only download what's needed for the page they're viewing. This improves performance.

4. App Router & Server Components

Next.js now uses the App Router (app/ directory) by default, leveraging React Server Components for better performance and flexibility in data fetching.
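
A rough example of what that looks like in practice, where the URL and the shape of the data are assumptions:

// app/posts/page.tsx (sketch: URL and data shape are assumptions)
type Post = { id: number; title: string };

export default async function PostsPage() {
  // Runs on the server; only the rendered HTML reaches the client.
  const res = await fetch("https://example.com/api/posts", { cache: "no-store" });
  const posts: Post[] = await res.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}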

5. Great Developer Experience

Fast refresh, TypeScript support, and a huge ecosystem make developing with Next.js a breeze.

6. Easy Deployment with Vercel

Since Next.js is built by Vercel, deploying your app is as simple as pushing to GitHub and letting Vercel handle the rest.

Cons ❌

1. Learning Curve

If you're coming from vanilla React, the concepts of SSR, SSG, ISR (Incremental Static Regeneration), and Server Components might take some time to grasp.

2. Server Costs for SSR

SSR requires a server to generate pages dynamically, which can increase hosting costs compared to purely static sites.

3. Opinionated Structure

Next.js has its own way of doing things, especially around routing and data fetching. If you want full control, you might feel restricted.

4. Complex API Routes

While API routes are great for small projects, they might not scale well for larger applications. You might need a dedicated backend eventually.

5. Client-Side Navigation Quirks

Sometimes, using next/link and the router (next/router in the Pages Router, or useRouter from next/navigation in the App Router) can be tricky, especially with deep linking and query parameters.

Final Thoughts 💭

Next.js is a powerhouse for fullstack development, but it's not perfect. If you want a balance between performance, SEO, and developer experience, it's a great choice. However, if you need full backend flexibility, consider pairing it with a dedicated backend framework.

Would you use Next.js for your next project?

Rails 8 Tip: Leveraging Parallel Testing by Default

One of the most significant improvements in Rails 8 is the enhanced parallel testing system that now comes enabled by default. This feature dramatically reduces test suite execution time with minimal configuration.

What Changed in Rails 8

In previous Rails versions, you had to explicitly opt into parallel testing with a parallelize call in your test helper. Rails 8 flips this approach: parallel testing is enabled out of the box, automatically detecting your system's CPU count and putting those cores to work.

How to Get the Most Out of It

Here's how to leverage this feature effectively:

# test/test_helper.rb
class ActiveSupport::TestCase
  # The generated default: one worker per processor
  parallelize(workers: :number_of_processors)

  # Or pin a specific worker count and choose processes (the default) or threads
  # parallelize(workers: 4, with: :threads)
end

Managing Database Setup

With parallel testing, each worker needs its own database. Rails handles this automatically: the first time the suite runs, it creates and schema-loads a numbered copy of your test database for each worker, so there's no separate setup step to remember.

Dealing with Shared Resources

When tests run in parallel, be careful with shared resources. Rails 8 includes helpers to manage this:

class SomeTest < ActiveSupport::TestCase
  parallelize_setup do |worker|
    # Code that runs once per worker
    Setup.prepare_shared_resource_for_worker(worker)
  end

  test "something that uses isolated resources" do
    # This test can run in parallel
  end
end

When to Turn It Off

Some tests or environments don't cope well with parallel workers. The simplest switch is the PARALLEL_WORKERS environment variable, which overrides the worker count for a single run; setting it to 1 runs the suite serially:

PARALLEL_WORKERS=1 bin/rails test

For a permanent change, lower the worker count in the parallelize call in test/test_helper.rb.

Enjoy the significantly faster test execution times with Rails 8's parallel testing defaults!