<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aniket's Blog]]></title><description><![CDATA[Aniket's Blog]]></description><link>https://blog.anikety.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1748920070207/a69aaf3b-730c-4bd4-83f6-59ee6ca40e61.png</url><title>Aniket&apos;s Blog</title><link>https://blog.anikety.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 10:56:28 GMT</lastBuildDate><atom:link href="https://blog.anikety.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[A Distributed Task Scheduler for 1M+ Daily Tasks]]></title><description><![CDATA[Introduction
This article is about how I upgraded my previous cron scheduler service that I built in Go for scaling it to around 1 million daily scheduled jobs. This lays out the overall system design, approach and considerations I took in building t...]]></description><link>https://blog.anikety.com/distributed-task-scheduler-for-1m-daily-tasks</link><guid isPermaLink="true">https://blog.anikety.com/distributed-task-scheduler-for-1m-daily-tasks</guid><category><![CDATA[golang]]></category><category><![CDATA[Golang developer]]></category><category><![CDATA[System Design]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[system]]></category><category><![CDATA[Blogging]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Programming Tips]]></category><category><![CDATA[Golang web development]]></category><category><![CDATA[gin-gonic]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Redis]]></category><category><![CDATA[redis cluster]]></category><dc:creator><![CDATA[Aniket Yadav]]></dc:creator><pubDate>Thu, 10 Jul 2025 00:20:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753316532320/f8328940-d39b-4ba4-94ab-db7db28fc939.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This article covers how I upgraded my previous <a target="_blank" href="https://github.com/Aniketyadav44/cronflow">cron scheduler service</a>, built in Go, to scale to around 1 million scheduled jobs per day. It lays out the overall system design, the approach, and the considerations that went into building it.</p>
<h2 id="heading-the-problem">The Problem</h2>
<p>I had created a <a target="_blank" href="https://github.com/Aniketyadav44/cronflow">cron scheduler service</a> that uses the <code>robfig/cron</code> package to schedule cron jobs on a single machine. This works fine for a limited number of jobs, but once the count grows into the thousands or millions, the design breaks down.</p>
<p>To schedule the crons, the package spawns goroutines, and at that volume they can easily overwhelm the machine’s resources. One option is to run multiple scheduler instances and divide the crons among them, but that brings its own issue: if an instance suddenly crashes, the jobs scheduled on that machine are lost! We would then need a mechanism to track which cron lives on which machine, adding further complexity. So keeping job state on a server instance would not work.</p>
<p>This article discusses how to build a system that can schedule a million jobs daily.</p>
<h2 id="heading-the-solution">The Solution</h2>
<p>I referred to multiple system design blogs &amp; videos (<a target="_blank" href="https://medium.com/@mayilb77/design-a-distributed-job-scheduler-for-millions-of-tasks-in-daily-operations-4132dc6d645f">Mayil’s Medium</a>, <a target="_blank" href="https://blog.algomaster.io/p/design-a-distributed-job-scheduler">Algomaster</a>, <a target="_blank" href="https://youtu.be/pzDwYHRzEnk?si=A3sq0lxIkEbIgDRY">Jordan’s YT</a>) to design a distributed scheduler service, and came up with the following points:</p>
<ul>
<li><p>A job submission service (UI/API) creates new jobs and stores them in a job store.</p>
</li>
<li><p>The job store persists all job-related data along with each job’s run history and status.</p>
</li>
<li><p>A scheduling service periodically queries the jobs table for all tasks due to run at the current time.</p>
</li>
<li><p>These jobs are then pushed to a distributed queue.</p>
</li>
<li><p>Workers consume &amp; process the queued jobs, updating each job’s run status in the job run history store.</p>
</li>
</ul>
<p>The idea is that, instead of scheduling jobs on a machine as in my earlier approach, we can simply poll the jobs table every minute to find all jobs due to run in that minute.</p>
<p>This is the big change! Instead of actually scheduling hardware cron jobs, we just fetch due jobs from the job store every minute and process them.</p>
<p>After polling and fetching the required jobs from the DB, we publish them to the queue.</p>
<p>For this, the blogs suggest the NoSQL database Cassandra, due to its low latency and high read/write throughput, which suits a workload that periodically polls the DB.</p>
<p>For queueing jobs, we can use Kafka.</p>
<h2 id="heading-my-approach-amp-considerations">My Approach &amp; Considerations</h2>
<p>To keep our infra simple and considering limited resources, we will use:</p>
<ul>
<li><p>PostgreSQL DB for storing jobs data and job run history.</p>
</li>
<li><p>Redis for queuing jobs in Redis Streams, managing distributed locks and storing jobs in sorted sets.</p>
</li>
</ul>
<p>Since we are using PostgreSQL, we should consider the cost of polling it every minute. Frequent polling puts heavy load on the DB and increases I/O, affecting our single source of truth for other operations.</p>
<p>To reduce that load, we can process jobs in batches. In this implementation, we query the DB every 10 minutes and temporarily load all of the jobs for the next 10 minutes. After those 10 minutes, they are removed from the temporary storage.</p>
<p>The next question is: where do we store these next-10-minutes jobs efficiently?</p>
<p>The answer is Redis’s sorted sets!</p>
<h3 id="heading-redis-zset"><code>Redis ZSET</code></h3>
<p>I first considered Redis lists, but scanning a list for a specific timestamp takes O(N) time, which can be significant given a large number of jobs per minute (e.g. thousands of jobs per minute at a spike).</p>
<p>A Redis sorted set brings the advantage of scores: members are kept ordered by their score, so we can use timestamps as scores and get fast access when fetching jobs for a specific timestamp.</p>
<p>A range read takes O(log N + M), where N is the total number of elements in the set and M is the number of elements returned. Internally, Redis first locates the first element satisfying the range condition, i.e. &gt;= min, in O(log N) time, similar to a binary search. From that node it walks linearly through the matching items, returning them until the bound &lt;= max is exceeded, which takes O(M) time.</p>
<h3 id="heading-redis-stream-amp-consumer-group"><code>Redis Stream &amp; Consumer Group</code></h3>
<p>For queuing jobs, we will use Redis Streams. The workers form a consumer group listening on this stream. Consumer groups solve the problem of duplicate job execution: the group acts as a load balancer across its consumers, distributing messages from the stream so that each job is delivered to exactly one consumer.</p>
<p>Redis Streams also lets consumers acknowledge job messages with XACK, on top of which we can build our acknowledgement and retry mechanism.</p>
<p>When a consumer receives a job and crashes before acknowledging it, Redis lets us claim that job back from a pending entries list. So if a job has sat in the pending list for too long, any other consumer can claim, process and ack it.</p>
<h2 id="heading-architecture">Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751952394078/bada4f0f-8d34-4320-9439-3fa27c25b5d9.jpeg" alt class="image--center mx-auto" /></p>
<p>This system consists of the following components:</p>
<ol>
<li><p><code>Dashboard</code>: The dashboard lets users create a new job, list all jobs and view job run entries with their status (completed, failed, permanently_failed). The dashboard instances are served behind a load balancer. On creation of a new job, an entry is made in the <code>jobs</code> table.</p>
</li>
<li><p><code>Scheduler</code>: The scheduler service has two sub-services - the Batch Processor and the Publisher.<br /> The Batch Processor polls the PostgreSQL DB every 10 mins, under a Redis lock, to query jobs for the next 10-minute range. It then adds those jobs to a Redis sorted set (ZSET).<br /> The Publisher runs every minute, fetches all jobs due at the current time from the ZSET, and publishes them to a Redis stream.</p>
</li>
<li><p><code>Worker</code>: The worker services form a consumer group that receives jobs from the Redis stream and processes them. On success, a worker records the run in the <code>job_runs</code> table as completed. On failure, it puts the job back on the stream for retrying. After a maximum of 3 retries, it marks the job as permanently failed and records that in the <code>job_runs</code> table.</p>
</li>
<li><p><code>PostgreSQL DB</code>: PostgreSQL stores job details in the <code>jobs</code> table, which is partitioned by the job’s scheduled hour, e.g. one partition for jobs whose scheduled hour is 00 to 06, another for 07 to 12, and so on. It stores the hour, minute, payload and retry count.<br /> A second table, <code>job_runs</code>, stores the job run entries with their statuses (completed, failed, permanently_failed), along with the output (for completed runs) and the error (for failed runs).</p>
</li>
<li><p><code>Redis</code>: Redis lets the scheduler’s Batch Processor take a lock so that only one instance polls the DB.<br /> A Redis sorted set stores jobs with their timestamps as scores, for fast access during the Publisher’s every-minute polling.<br /> A Redis stream queues the jobs.</p>
</li>
</ol>
<p>As this is a time-sensitive system, jobs are stored and processed at their UTC times across all services and the DB.</p>
<h3 id="heading-dashboard">Dashboard</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751952449455/5afe42ee-d25b-42d0-89a9-af0d65603caf.jpeg" alt class="image--center mx-auto" /></p>
<p>The dashboard lets users create a new job, list all jobs and view the run entries of a job.</p>
<ol>
<li><p><code>Creating &amp; Viewing jobs</code>: When a job is created from the dashboard interface, the client’s timezone is passed along with the request. The schedule time is converted to UTC and the job is inserted into the <code>jobs</code> table, which has <code>id</code>, <code>hour</code> (UTC-converted), <code>minute</code> (UTC-converted), <code>type</code> (ping, email, slack, webhook), <code>payload</code>, <code>retries</code> count, <code>created_at</code> and <code>updated_at</code> times.<br /> The <code>jobs</code> table is partitioned on the <code>hour</code> column into hour-range partitions, e.g. <code>jobs_00_to_06</code> for jobs whose UTC hour is between 00 and 06, <code>jobs_07_to_12</code> for jobs whose UTC hour is between 07 and 12, and so on.</p>
</li>
<li><p><code>Viewing job run entries</code>: Job run entries are read from the <code>job_runs</code> table, which stores the <code>id</code>, <code>job_id</code>, <code>status</code> (completed, failed, permanently_failed), <code>output</code> for successful runs, <code>error</code> for failed runs, and the <code>scheduled_at</code> and <code>completed_at</code> times.</p>
</li>
</ol>
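<p>The local-to-UTC conversion step can be sketched like this. This is a hypothetical helper, not the dashboard’s actual code, and for simplicity it uses a fixed reference date, so for DST zones it reflects that date’s offset:</p>

```go
package main

import (
	"fmt"
	"time"
	_ "time/tzdata" // embed the timezone database so LoadLocation works anywhere
)

// toUTCSchedule converts a client-local hour/minute plus the client's IANA
// timezone name into the UTC hour/minute stored in the jobs table.
func toUTCSchedule(hour, minute int, tz string) (int, int, error) {
	loc, err := time.LoadLocation(tz)
	if err != nil {
		return 0, 0, err
	}
	// A fixed reference date: only the wall-clock offset matters here.
	local := time.Date(2025, 1, 15, hour, minute, 0, 0, loc)
	utc := local.UTC()
	return utc.Hour(), utc.Minute(), nil
}

func main() {
	h, m, err := toUTCSchedule(9, 30, "Asia/Kolkata") // 09:30 IST
	if err != nil {
		panic(err)
	}
	fmt.Printf("%02d:%02d UTC\n", h, m) // 04:00 UTC
}
```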
<h3 id="heading-scheduler-service">Scheduler service</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751952468043/7f81a1cf-f20c-4b33-8f9f-c0613531ae57.jpeg" alt class="image--center mx-auto" /></p>
<p>The Scheduler service has two main components: <code>Batch Processor</code> and <code>Publisher</code>:</p>
<p><code>Batch Processor</code>:</p>
<ol>
<li><p>It polls the PostgreSQL DB every 10 mins to read jobs from the <code>jobs</code> table.</p>
</li>
<li><p>For multiple scheduler service instances, it first tries to create a Redis lock so that only one scheduler instance polls the DB.</p>
</li>
<li><p>If an instance is unable to get the lock, it exits and waits for the next timer period.<br /> The Batch Processor runs at times rounded to 10 minutes, i.e. 12:00, 12:10, 12:20, … and each run fetches the next 10-minute range: at 12:10 it fetches jobs for 12:11 to 12:20, at 12:20 it fetches jobs for 12:21 to 12:30, and so on.</p>
</li>
<li><p>On getting the Redis lock, the jobs are queried and pushed to the Redis sorted set (ZSET).</p>
</li>
<li><p>After pushing the jobs, it also clears from the ZSET any jobs of the previous window that were not processed in the past 10 mins. These lost jobs can be logged as lost/failed.</p>
</li>
</ol>
<p>Here, a ZSET is used because of its score mechanism: the jobs are kept sorted by their timestamp scores (the sorting key), so insertion is O(log N) and fetching is O(log N + M).<br />In case of errors, the lock-holding Batch Processor retries up to a maximum of 3 times.</p>
<p><code>Publisher</code>:</p>
<ol>
<li><p>It runs every minute to get the jobs due at that specific time from the ZSET.</p>
</li>
<li><p>All jobs for the current minute are fetched in batches, across all scheduler instances, until no jobs remain in the ZSET for that time.</p>
</li>
<li><p>The jobs are fetched and deleted in one atomic step using a single Lua script that runs on the Redis server, so the same jobs can never be fetched by different instances.</p>
</li>
<li><p>It then publishes the fetched jobs to the Redis stream using <code>XAdd</code>.</p>
</li>
</ol>
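<p>The fetch-and-delete script might look roughly like this (a sketch; the project’s actual key and argument layout may differ). Because the script runs server-side via <code>EVAL</code>, the read and the removal are a single atomic step:</p>

```go
package main

import "fmt"

// popJobsScript fetches up to ARGV[3] due jobs (scores between ARGV[1] and
// ARGV[2]) from the ZSET at KEYS[1] and removes them in the same call, so no
// two publisher instances can receive the same job.
const popJobsScript = `
local jobs = redis.call('ZRANGEBYSCORE', KEYS[1], ARGV[1], ARGV[2], 'LIMIT', 0, ARGV[3])
if #jobs > 0 then
  redis.call('ZREM', KEYS[1], unpack(jobs))
end
return jobs
`

func main() {
	// With go-redis this would run roughly as:
	//   redis.NewScript(popJobsScript).Run(ctx, rdb, []string{"jobs:zset"}, min, max, batchSize)
	fmt.Println(len(popJobsScript) > 0)
}
```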
<h3 id="heading-worker-service">Worker service</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751952495183/cc3c248a-2e7a-4e32-b60c-17541d52d6fc.jpeg" alt class="image--center mx-auto" /></p>
<p>The worker services form a consumer group listening on this stream. Using a consumer group solves the problem of duplicate job execution, since the group ensures that a single job message is delivered to only one consumer.</p>
<ol>
<li><p>When a worker starts, it first checks for the consumer group and creates it if it is not present.</p>
</li>
<li><p>Then it reads jobs from the stream in batches of N using <code>XREADGROUP</code>, blocking for up to 5 seconds per read. As soon as N jobs are available in the stream, it fetches them; after 5 seconds, if fewer than N jobs have arrived, it fetches those anyway.</p>
</li>
<li><p>These jobs are then processed based on the type present in their payload: for <code>ping</code> we make a GET call, for <code>email</code> we send an email via the <code>mailtrap.io</code> service, for <code>slack</code> we make a POST call to the given Slack URL, and for <code>webhook</code> we make a POST call to the given URL with the given body.</p>
</li>
<li><p>If the job execution succeeds, we add an entry to the <code>job_runs</code> table with <code>completed</code> status and reset the job’s <code>retries</code> count to 0 in the <code>jobs</code> table. The job message is then acknowledged using <code>XAck</code> and deleted from the stream using <code>XDel</code>.</p>
</li>
<li><p>If the job execution fails, we first read the job’s <code>retries</code> count from the <code>jobs</code> table. If it is less than 3, we add an entry to <code>job_runs</code> with <code>failed</code> status and increment the job’s <code>retries</code> count. The message is then acknowledged and deleted from the stream, and a fresh copy of the job is published to the stream using <code>XAdd</code> for the retry.</p>
</li>
<li><p>If a job fails with a <code>retries</code> count of 3 or more, we add an entry to <code>job_runs</code> with <code>permanently_failed</code> status and reset the job’s <code>retries</code> count to 0 in the <code>jobs</code> table. The message is then acknowledged and deleted from the stream.</p>
</li>
</ol>
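<p>Steps 4 to 6 above reduce to a small pure decision, sketched below. This is a hypothetical helper; the real worker also performs the DB writes, <code>XAck</code>, <code>XDel</code> and <code>XAdd</code> around it:</p>

```go
package main

import "fmt"

// Possible job-run statuses, matching the job_runs table.
const (
	StatusCompleted         = "completed"
	StatusFailed            = "failed"
	StatusPermanentlyFailed = "permanently_failed"
)

const maxRetries = 3

// nextAction maps (execution result, current retries count) to the status to
// record, whether to requeue the job, and the new retries count.
func nextAction(succeeded bool, retries int) (status string, requeue bool, newRetries int) {
	switch {
	case succeeded:
		return StatusCompleted, false, 0
	case retries < maxRetries:
		return StatusFailed, true, retries + 1
	default:
		return StatusPermanentlyFailed, false, 0
	}
}

func main() {
	fmt.Println(nextAction(false, 2)) // failed true 3
	fmt.Println(nextAction(false, 3)) // permanently_failed false 0
}
```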
<p>A worker becomes very slow if a single process handles everything on the machine, so we use worker pools: e.g. 10 goroutines per worker, concurrently processing the jobs arriving on the Redis stream.</p>
<h2 id="heading-some-calculations-testing-amp-observations">Some Calculations, Testing &amp; Observations</h2>
<p>Since we are now targeting around a million scheduled jobs daily, the average load would be</p>
<p>1,000,000 / (24 × 60) ≈ 695 jobs/minute,</p>
<p>i.e. roughly 6,950 jobs per 10-minute batch window. But that assumes a uniform distribution, and in a real system the tasks would not be evenly spread across time.</p>
<p>Let’s assume the single busiest 10-minute range carries a burst of around 75K jobs, with some minutes spiking to around 15K.</p>
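<p>Checking the back-of-the-envelope numbers (the averages above are rounded up slightly):</p>

```go
package main

import "fmt"

func main() {
	const dailyJobs = 1_000_000
	perMinute := float64(dailyJobs) / (24 * 60) // minutes in a day
	fmt.Printf("%.1f jobs/minute on average\n", perMinute)
	fmt.Printf("%.1f jobs per 10-minute batch on average\n", perMinute*10)
}
```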
<p>So with these numbers, I tested the system deployed on ECS + Fargate:</p>
<p>I used GPT to generate a Python script, <code>generate_insert.py</code>, which produces a batch insertion query to load the DB for testing. It takes the hour, the start and end minutes, a list of heavy minutes, and N, the spike count those heavy minutes should carry. It is quite customizable.</p>
<p>Configurations:</p>
<ul>
<li><p>1 dashboard instance</p>
</li>
<li><p>2 scheduler instances</p>
</li>
<li><p>4 worker instances</p>
</li>
<li><p>Fargate specs: 0.25 vCPU and 0.5 GB memory.</p>
</li>
</ul>
<p>Observations:</p>
<ul>
<li><p>Batch-processing jobs from the DB into the ZSET and publishing them from the ZSET to the stream every minute were both very quick; everything happened within milliseconds.</p>
</li>
<li><p>On the worker instances, job consumption was very quick at the start, around 1,000 to 1,500 jobs per second. But as the run continued, requeuing of failed jobs (each execution had a 5-second timeout), waiting out that timeout before each failure, and up to 3 retries per failed job made the overall execution time grow steeply.</p>
</li>
</ul>
<p>Overall, processing this 10-minute batch took around 13 minutes with 10 goroutines in the worker pool on each worker machine, including the sudden spikes of 15K to 20K jobs in some minutes.</p>
<p>Without worker pools, it took more than 30 minutes. So yes, increasing a worker machine’s hardware resources and adding more goroutines can definitely reduce the overall job execution time; alternatively, we can simply scale the worker machines horizontally.</p>
<h2 id="heading-improvements">Improvements</h2>
<h3 id="heading-batch-processing-failures">Batch Processing failures</h3>
<p>Consider the case where a scheduler instance fails after acquiring the lock: the batch for that 10-minute range would never be processed.</p>
<p>The lock’s expiration time is set to 2 minutes. So, to handle this, on completing a batch we can set a key in Redis marking that batch as done (e.g. batch_completed_11_20 for the 12:11 to 12:20 range) with a 10-minute expiration.</p>
<p>The Batch Processor can then check every 2 minutes whether the batch for the current 10-minute range was actually processed.</p>
<h3 id="heading-worker-instance-crashes">Worker instance crashes</h3>
<p>When a job is claimed by a consumer in the consumer group, it is also registered in a Pending Entries List (PEL), which tracks jobs not yet acknowledged with <code>XACK</code>. So even if a consumer crashes or leaves the group, its jobs remain in the PEL.</p>
<p>To handle this, we can run a periodic goroutine that finds PEL entries whose idle time has crossed a threshold, claims them using <code>XCLAIM</code>, processes them, acks them and removes them from the PEL.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>So with this approach, we can process thousands of jobs every minute: the Batch Processor picks up jobs every 10 minutes, and the Publisher pushes them to the Redis stream every minute, relying on the ZSET’s fast insertion and retrieval.<br />It is genuinely exciting to design and build a system that handles this kind of workload. It is not yet fully reliable and fault tolerant, but it can be extended to become so.</p>
<p>Find this project here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Aniketyadav44/dscheduler">https://github.com/Aniketyadav44/dscheduler</a></div>
<p> </p>
<hr />
<p>If you find this article helpful, don't forget to hit the ❤️ button.</p>
<p>Check out my website <a target="_blank" href="https://anikety.com/"><strong>here</strong></a> and feel free to connect.</p>
<p>Happy Coding! 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Building a Cron Scheduler with RabbitMQ in Go]]></title><description><![CDATA[Introduction
This project is about a cron scheduler service which schedules cron jobs that are created using a dashboard interface. It publishes the jobs to the consumer service through RabbitMQ job queue, where it acknowledges successful job executi...]]></description><link>https://blog.anikety.com/building-a-cron-scheduler-with-rabbitmq-in-go</link><guid isPermaLink="true">https://blog.anikety.com/building-a-cron-scheduler-with-rabbitmq-in-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[Go]]></category><category><![CDATA[golang]]></category><category><![CDATA[Golang web development]]></category><category><![CDATA[rabbitmq]]></category><category><![CDATA[RabbitMQ Tutorial]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Databases]]></category><category><![CDATA[message queue]]></category><category><![CDATA[message broker]]></category><category><![CDATA[System Design]]></category><category><![CDATA[System Architecture]]></category><dc:creator><![CDATA[Aniket Yadav]]></dc:creator><pubDate>Sat, 28 Jun 2025 10:27:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751091630596/00af2389-61ce-4dad-9f9b-902179bc0a81.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This project is a cron scheduler service that schedules cron jobs created through a dashboard interface. It publishes the jobs to a consumer service through a RabbitMQ job queue, which acknowledges successful job executions and retries up to 3 times on failures, updating each job’s status in a PostgreSQL database so it can be tracked from the dashboard.</p>
<h2 id="heading-some-theory">Some Theory</h2>
<ol>
<li><p><code>Cron jobs</code>: These are jobs/tasks scheduled to run at specific times. Using <code>cron expressions</code>, they can be configured to run once at a given time, repeat every day at a specific time, repeat every x hours, and so on.</p>
</li>
<li><p><a target="_blank" href="https://www.rabbitmq.com/"><code>RabbitMQ</code></a>: It is an open-source <code>message broker</code> that allows two services to communicate asynchronously. It provides a queue through which two separate services can communicate by sending messages. This is one of the best communication methods used by distributed systems. We will be using the AMQP (Advanced Message Queuing Protocol) in this project supported by RabbitMQ.</p>
</li>
<li><p><a target="_blank" href="https://www.rabbitmq.com/tutorials/amqp-concepts#amqp-model"><code>AMQP</code></a>: The Advanced Message Queuing Protocol is an open standard protocol in which <code>messages</code> (data) are published through <code>exchanges</code> to <code>queues</code>. Queues are connected to exchanges via <code>bindings</code>, which define which exchange routes data to which queue. Another service can then consume the messages from the queue.</p>
</li>
</ol>
<h2 id="heading-project-architecture">Project Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751095740803/e45f43c2-09df-405c-8eff-d0796f8608b6.jpeg" alt class="image--center mx-auto" /></p>
<p>This project will have two main components:</p>
<ol>
<li><p><code>scheduler</code><br /> This will host our dashboard and schedule the cron jobs. On cron timings, the jobs will be published to the queue.</p>
</li>
<li><p><code>consumer</code><br /> This will consume messages from the queue and process them based on the job type. On success, it registers the run in the PostgreSQL DB. On failure, it retries up to a maximum of 3 times; if a job fails all 3 retries, it is registered as permanently_failed in the DB. It also <code>ACK</code>s (acknowledges) or <code>NACK</code>s (negatively acknowledges, putting the message back on the queue) jobs depending on the result.</p>
</li>
</ol>
<p>Find this project here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Aniketyadav44/cronflow">https://github.com/Aniketyadav44/cronflow</a></div>
<p> </p>
<blockquote>
<p>Note:</p>
<p>The dashboard in this project has pages to display basic stats, listing all of the scheduled jobs and listing job run entries with status (running/completed/failed) which are not explained in this blog.</p>
<p>The main purpose of this blog is to only explain the code for core logic of cron job scheduling, publishing to and consuming from RabbitMQ, processing the job based on type (ping, email, slack, webhook) with acknowledgement &amp; retries and registering everything on the PostgreSQL database.</p>
<p>To understand the project structure used here, please refer to this blog: <a target="_blank" href="https://blog.anikety.com/go-backend-project-structure">Effective Project Structure for Backend Projects in Go</a> written by me!</p>
</blockquote>
<h3 id="heading-scheduling-cron-job-amp-publishing-to-rabbitmq">Scheduling cron job &amp; Publishing to RabbitMQ</h3>
<p>In scheduler service’s <code>apiService.go</code> file, which is located at <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/main/scheduler/internal/services/apiService.go"><code>/scheduler/internal/services/apiService.go</code></a></p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> ApiService <span class="hljs-keyword">struct</span> {
    db        *sql.DB
    cron      *cron.Cron
    rabbitmq  *amqp091.Connection
    mqChannel *amqp091.Channel
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *ApiService)</span> <span class="hljs-title">CreateNewJob</span><span class="hljs-params">(job *models.Job)</span> <span class="hljs-title">error</span></span> {
    <span class="hljs-comment">// first putting this job in db</span>
    query := <span class="hljs-string">`INSERT INTO jobs(cron_expr, type, payload)
              VALUES ($1, $2, $3) RETURNING id;
    `</span>
    payloadJSON, err := json.Marshal(job.Payload)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }
    <span class="hljs-keyword">var</span> jobId <span class="hljs-keyword">int</span>
    <span class="hljs-keyword">if</span> err := s.db.QueryRow(query, job.CronExpr, job.Type, payloadJSON).Scan(&amp;jobId); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }
    job.Id = jobId

    <span class="hljs-comment">// scheduling a cron job</span>
    id, err := s.cron.AddFunc(job.CronExpr, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        log.Println(<span class="hljs-string">"running cron job: publishing to rabbitmq"</span>)
        q, err := s.mqChannel.QueueDeclare(<span class="hljs-string">"cron_events"</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">nil</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"failed creating a queue for rabbitmq: "</span>, err.Error())
            <span class="hljs-keyword">return</span>
        }

        ctx, cancel := context.WithTimeout(context.Background(), <span class="hljs-number">5</span>*time.Second)
        <span class="hljs-keyword">defer</span> cancel()

        jsonBody, err := json.Marshal(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]any{
            <span class="hljs-string">"job"</span>:  job,
            <span class="hljs-string">"time"</span>: time.Now().Format(<span class="hljs-string">"2006-01-02T15:04:05.000000-07:00"</span>),
        })
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"failed creating payload for rabbitmq: "</span>, err.Error())
            <span class="hljs-keyword">return</span>
        }

        <span class="hljs-keyword">if</span> err := s.mqChannel.PublishWithContext(ctx, <span class="hljs-string">""</span>, q.Name, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, amqp091.Publishing{
            ContentType: <span class="hljs-string">"application/json"</span>,
            Body:        jsonBody,
        }); err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"failed publishing to rabbitmq: "</span>, err.Error())
            <span class="hljs-keyword">return</span>
        }
        log.Println(<span class="hljs-string">"event published to rabbitmq!"</span>)

    })
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-comment">// on cron scheduling error, deleting the job created in db</span>
        delQuery := <span class="hljs-string">`DELETE FROM jobs WHERE id = $1`</span>
        s.db.Exec(delQuery, jobId)
        <span class="hljs-keyword">return</span> err
    }

    updateQuery := <span class="hljs-string">`UPDATE jobs SET cron_id = $1 WHERE id = $2`</span>
    s.db.Exec(updateQuery, id, jobId)
    job.CronId = <span class="hljs-keyword">int</span>(id)
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
}
</code></pre>
<p>This is a service function created on our <code>ApiService</code> and called from the <code>/api/create</code> API’s handler.<br />We first insert the job into the <code>jobs</code> table:</p>
<pre><code class="lang-go">query := <span class="hljs-string">`INSERT INTO jobs(cron_expr, type, payload)
              VALUES ($1, $2, $3) RETURNING id;
    `</span>
    payloadJSON, err := json.Marshal(job.Payload)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }
    <span class="hljs-keyword">var</span> jobId <span class="hljs-keyword">int</span>
    <span class="hljs-keyword">if</span> err := s.db.QueryRow(query, job.CronExpr, job.Type, payloadJSON).Scan(&amp;jobId); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }
    job.Id = jobId
</code></pre>
<p>After inserting, we store the returned row id in the <code>Id</code> field of the <code>job</code> variable.</p>
<p>Then we schedule the cron job using <a target="_blank" href="https://github.com/robfig/cron"><code>robfig/cron</code></a> package. We have loaded the cron instance from this package into our <code>ApiService</code>.</p>
<p>The cron job is scheduled using <a target="_blank" href="https://pkg.go.dev/github.com/robfig/cron/v3@v3.0.1#Cron.AddFunc"><code>AddFunc()</code></a> function from the package, which takes two parameters, the cron expression and the function to run for this cron.</p>
<p>The cron expression we use is:</p>
<pre><code class="lang-plaintext">x y * * *
</code></pre>
<p>Where <code>x</code> is the minute and <code>y</code> is the hour, so the job repeats once a day at that time. This cron expression is stored in our <code>job</code> model, which we access as <code>job.CronExpr</code>.</p>
<pre><code class="lang-go"><span class="hljs-comment">// scheduling a cron job</span>
    id, err := s.cron.AddFunc(job.CronExpr, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        log.Println(<span class="hljs-string">"running cron job: publishing to rabbitmq"</span>)
        q, err := s.mqChannel.QueueDeclare(<span class="hljs-string">"cron_events"</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">nil</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"failed creating a queue for rabbitmq: "</span>, err.Error())
            <span class="hljs-keyword">return</span>
        }
</code></pre>
<p>Inside the function that will be invoked at cron time, we create a RabbitMQ queue using the <a target="_blank" href="https://github.com/rabbitmq/amqp091-go"><code>rabbitmq/amqp091-go</code></a> package’s <a target="_blank" href="https://pkg.go.dev/github.com/rabbitmq/amqp091-go@v1.10.0#Channel.QueueDeclare"><code>QueueDeclare()</code></a> function.<br />We have stored the RabbitMQ channel inside our <code>ApiService</code> as <code>mqChannel</code>, which is of type <code>*amqp091.Channel</code>.</p>
<p>We create the queue by passing its name, which in this case is <code>cron_events</code>, and then check for any errors while declaring it.</p>
<p>This creates the queue (if it does not already exist) from which the consumer will consume the messages. Then we prepare the payload to send in the message over the queue.</p>
<pre><code class="lang-go">ctx, cancel := context.WithTimeout(context.Background(), <span class="hljs-number">5</span>*time.Second)
<span class="hljs-keyword">defer</span> cancel()

jsonBody, err := json.Marshal(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]any{
    <span class="hljs-string">"job"</span>:  job,
    <span class="hljs-string">"time"</span>: time.Now().Format(<span class="hljs-string">"2006-01-02T15:04:05.000000-07:00"</span>),
})
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Println(<span class="hljs-string">"failed creating payload for rabbitmq: "</span>, err.Error())
    <span class="hljs-keyword">return</span>
}
</code></pre>
<p>Here, we first create a context with a 5-second timeout to be used when publishing the message to the queue. This matters if publishing stalls: instead of blocking the cron goroutine indefinitely, the publish call is cancelled after 5 seconds.</p>
<p>Then we build a JSON body with a <code>job</code> key that stores our job model’s JSON data, and a <code>time</code> key that stores the current scheduled time of this cron run.</p>
<p>Then we check for any errors in creating the payload body and continue to publish the message.</p>
<pre><code class="lang-go"><span class="hljs-keyword">if</span> err := s.mqChannel.PublishWithContext(ctx, <span class="hljs-string">""</span>, q.Name, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, amqp091.Publishing{
    ContentType: <span class="hljs-string">"application/json"</span>,
    Body:        jsonBody,
}); err != <span class="hljs-literal">nil</span> {
    log.Println(<span class="hljs-string">"failed publishing to rabbitmq: "</span>, err.Error())
    <span class="hljs-keyword">return</span>
}
</code></pre>
<p>We publish the message to the queue using the <a target="_blank" href="https://pkg.go.dev/github.com/rabbitmq/amqp091-go@v1.10.0#Channel.PublishWithContext"><code>PublishWithContext()</code></a> function on <code>mqChannel</code>, passing the timeout context, the default (<code>""</code>) exchange, and the queue’s name.</p>
<p>In this, we have also passed our payload using <code>amqp091.Publishing{}</code> where we have mentioned the <code>ContentType</code> and <code>Body</code>.</p>
<p>We then check for any errors in publishing the message to queue.</p>
<p>Then, back outside the callback, after scheduling the cron job and defining its function, we check for any errors returned by <code>AddFunc()</code>.</p>
<pre><code class="lang-go"><span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-comment">// on cron scheduling error, deleting the job created in db</span>
    delQuery := <span class="hljs-string">`DELETE FROM jobs WHERE id = $1`</span>
    s.db.Exec(delQuery, jobId)
    <span class="hljs-keyword">return</span> err
}
</code></pre>
<p>If there were errors, then we delete the DB entry we made for this job earlier.</p>
<pre><code class="lang-go">updateQuery := <span class="hljs-string">`UPDATE jobs SET cron_id = $1 WHERE id = $2`</span>
s.db.Exec(updateQuery, id, jobId)
job.CronId = <span class="hljs-keyword">int</span>(id)
<span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
</code></pre>
<p>If there were no errors, we finally update the <code>cron_id</code> of this job’s entry in the DB.</p>
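<p>For reference, here is a minimal <code>jobs</code> schema consistent with the queries in this section. The column names come from the queries themselves; the types and nullability are assumptions, not the project’s actual migration:</p>

```go
package main

import "fmt"

// Hypothetical DDL matching the INSERT/UPDATE/DELETE queries above.
// Column types and defaults are assumptions; adjust to the real migration.
const createJobsTable = `
CREATE TABLE IF NOT EXISTS jobs (
    id        SERIAL PRIMARY KEY,
    cron_expr TEXT NOT NULL,
    type      TEXT NOT NULL,
    payload   JSONB,
    cron_id   INTEGER
);`

func main() {
	fmt.Println(createJobsTable)
}
```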
<h3 id="heading-listening-on-the-rabbitmqs-job-queue-amp-consuming-jobs">Listening on the RabbitMQ’s Job Queue &amp; Consuming Jobs</h3>
<p>In consumer service’s <code>rabbitmqService.go</code> file, which is located at <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/main/consumer/internal/services/rabbitmqService.go"><code>/consumer/internal/services/rabbitmqService.go</code></a></p>
<p>We have first created a <code>RMQService</code> struct and a <code>NewRMQService()</code> constructor.</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> RMQService <span class="hljs-keyword">struct</span> {
    dbService *DBService
    conn      *amqp091.Connection
    channel   *amqp091.Channel
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">NewRMQService</span><span class="hljs-params">(db *sql.DB, conn *amqp091.Connection, channel *amqp091.Channel)</span> *<span class="hljs-title">RMQService</span></span> {
    <span class="hljs-keyword">return</span> &amp;RMQService{
        dbService: NewDBService(db),
        conn:      conn,
        channel:   channel,
    }
}
</code></pre>
<p>This struct holds the database service <code>dbService</code>, the RabbitMQ connection <code>conn</code> and the RabbitMQ channel <code>channel</code>.</p>
<p>In the <code>main.go</code>'s main function, we create a new rabbitmq service’s instance</p>
<pre><code class="lang-go">rabbitmqService := services.NewRMQService(cfg.Db, cfg.RabbitMQ, cfg.MQChannel)
ctx, cancel := context.WithCancel(context.Background())
<span class="hljs-keyword">defer</span> cancel()
rabbitmqService.Start(ctx)
</code></pre>
<p>After creating the rabbitmq service’s instance, we create a <code>WithCancel</code> context and defer its <code>cancel()</code> function in <code>main</code>, so the RabbitMQ consumer shuts down gracefully when the service exits. Then we call the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/rabbitmqService.go#L31"><code>Start()</code></a> function.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *RMQService)</span> <span class="hljs-title">Start</span><span class="hljs-params">(ctx context.Context)</span></span> {
    q, err := s.channel.QueueDeclare(<span class="hljs-string">"cron_events"</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in creating rabbitmq queue: "</span>, err.Error())
        <span class="hljs-keyword">return</span>
    }

    msgs, err := s.channel.Consume(q.Name, <span class="hljs-string">""</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in creating a consume channel for rabbitmq: "</span>, err.Error())
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        <span class="hljs-keyword">for</span> {
            <span class="hljs-keyword">select</span> {
            <span class="hljs-keyword">case</span> &lt;-ctx.Done():
                log.Println(<span class="hljs-string">"Stopping rabbitmq..."</span>)
                <span class="hljs-keyword">return</span>
            <span class="hljs-keyword">case</span> msg, ok := &lt;-msgs:
                <span class="hljs-keyword">if</span> !ok {
                    log.Println(<span class="hljs-string">"RabbitMQ message channel is closed."</span>)
                    <span class="hljs-keyword">return</span>
                }
                processMessage(&amp;msg, s.dbService)
            }
        }
    }()
    log.Println(<span class="hljs-string">"RabbitMQ Consumer Running: Waiting for messages..."</span>)

    &lt;-ctx.Done()
}
</code></pre>
<p>In Start function, we first declare the queue using <a target="_blank" href="https://pkg.go.dev/github.com/rabbitmq/amqp091-go@v1.10.0#Channel.QueueDeclare"><code>QueueDeclare()</code></a> and then call the <a target="_blank" href="https://pkg.go.dev/github.com/rabbitmq/amqp091-go@v1.10.0#Channel.Consume"><code>Consume()</code></a> function.</p>
<pre><code class="lang-go">msgs, err := s.channel.Consume(q.Name, <span class="hljs-string">""</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">false</span>, <span class="hljs-literal">nil</span>)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Println(<span class="hljs-string">"error in creating a consume channel for rabbitmq: "</span>, err.Error())
    <span class="hljs-keyword">return</span>
}
</code></pre>
<p>In this, we pass the queue’s name and set <code>autoAck</code> to false, so that we can manually <code>ACK</code> and <code>NACK</code> the job messages for retries.</p>
<pre><code class="lang-go"><span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> {
        <span class="hljs-keyword">select</span> {
        <span class="hljs-keyword">case</span> &lt;-ctx.Done():
            log.Println(<span class="hljs-string">"Stopping rabbitmq..."</span>)
            <span class="hljs-keyword">return</span>
        <span class="hljs-keyword">case</span> msg, ok := &lt;-msgs:
            <span class="hljs-keyword">if</span> !ok {
                log.Println(<span class="hljs-string">"RabbitMQ message channel is closed."</span>)
                <span class="hljs-keyword">return</span>
            }
            processMessage(&amp;msg, s.dbService)
        }
    }
}()
log.Println(<span class="hljs-string">"RabbitMQ Consumer Running: Waiting for messages..."</span>)

&lt;-ctx.Done()
</code></pre>
<p>Then we run a goroutine with an infinite <code>for</code> loop whose <code>select</code> statement waits on two channels: the cancel context’s <code>Done()</code> channel and RabbitMQ’s consume channel.</p>
<p>When the application shuts down, the context’s <code>cancel()</code> is called, a value is received on <code>&lt;-ctx.Done()</code>, and we return from this function so the RabbitMQ connection can be closed.</p>
<p>When a message is published to the queue by the <code>scheduler</code> service, it arrives on the <code>msgs</code> channel, is received by the <code>&lt;-msgs</code> case, and is stored in <code>msg</code>.</p>
<p>Then we call the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/rabbitmqService.go#L67"><code>processMessage()</code></a> function to further parse the job’s payload and process the job.</p>
<h3 id="heading-processing-the-job">Processing the Job</h3>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">processMessage</span><span class="hljs-params">(msg *amqp091.Delivery, dbService *DBService)</span></span> {
    log.Println(<span class="hljs-string">"Received message on RabbitMQ channel: "</span>, <span class="hljs-keyword">string</span>(msg.Body))

    <span class="hljs-comment">// parsing message body which has keys "job"[Job json] and "time"[Schedule time string]</span>
    <span class="hljs-keyword">var</span> body <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]any
    <span class="hljs-keyword">if</span> err := json.Unmarshal(msg.Body, &amp;body); err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in extracting message payload: "</span>, err.Error())
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-comment">// parsing job json from the message body json</span>
    jobBody, ok := body[<span class="hljs-string">"job"</span>].(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]any)
    <span class="hljs-keyword">if</span> !ok {
        log.Println(<span class="hljs-string">"error in extracting job json: "</span>, body[<span class="hljs-string">"job"</span>])
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }
    <span class="hljs-comment">// parsing scheduled time from the message body json</span>
    sTime, ok := body[<span class="hljs-string">"time"</span>].(<span class="hljs-keyword">string</span>)
    <span class="hljs-keyword">if</span> !ok {
        log.Println(<span class="hljs-string">"error in extracting scheduled time: "</span>, body[<span class="hljs-string">"time"</span>])
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-comment">// converting the job json to bytes, to convert it to models.Job</span>
    jobBodyByte, err := json.Marshal(jobBody)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in parsing job json: "</span>, err.Error())
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }
    <span class="hljs-keyword">var</span> job *models.Job
    <span class="hljs-keyword">if</span> err := json.Unmarshal(jobBodyByte, &amp;job); err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in extracting job: "</span>, err.Error())
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-comment">// get retries of any existing job entry for this job id, scheduled time which was not failed</span>
    <span class="hljs-keyword">var</span> jobEntry *models.JobEntry
    j, err := dbService.getExistingJobEntry(job, sTime)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error in getting a job entry: "</span>, err.Error())
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }
    <span class="hljs-keyword">if</span> j != <span class="hljs-literal">nil</span> {
        <span class="hljs-comment">// if a job entry already exists, update that in jobEntry variable</span>
        jobEntry = j
    } <span class="hljs-keyword">else</span> {
        createdEntry, err := dbService.createNewJobEntry(job, sTime)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"error creating new entry in db: "</span>, err.Error())
            msg.Ack(<span class="hljs-literal">false</span>)
            <span class="hljs-keyword">return</span>
        }
        jobEntry = createdEntry
    }

    <span class="hljs-comment">// checking if the retry count reached max retries</span>
    <span class="hljs-keyword">if</span> jobEntry.Retries &gt;= MaxJobRetries {
        dbService.markJobAsPermanentlyFailed(jobEntry)
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-keyword">switch</span> job.Type {
    <span class="hljs-keyword">case</span> <span class="hljs-string">"ping"</span>:
        <span class="hljs-keyword">if</span> err := processPingJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
            handleJobError(dbService, err, msg, jobEntry)
            <span class="hljs-keyword">return</span>
        }
    <span class="hljs-keyword">case</span> <span class="hljs-string">"email"</span>:
        <span class="hljs-keyword">if</span> err := processEmailJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
            handleJobError(dbService, err, msg, jobEntry)
            <span class="hljs-keyword">return</span>
        }
    <span class="hljs-keyword">case</span> <span class="hljs-string">"slack"</span>:
        <span class="hljs-keyword">if</span> err := processSlackJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
            handleJobError(dbService, err, msg, jobEntry)
            <span class="hljs-keyword">return</span>
        }
    <span class="hljs-keyword">case</span> <span class="hljs-string">"webhook"</span>:
        <span class="hljs-keyword">if</span> err := processWebhookJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
            handleJobError(dbService, err, msg, jobEntry)
            <span class="hljs-keyword">return</span>
        }
    <span class="hljs-keyword">default</span>:
        handleJobError(dbService, fmt.Errorf(<span class="hljs-string">"invalid event type: %s"</span>, job.Type), msg, jobEntry)
    }

    msg.Ack(<span class="hljs-literal">false</span>)
}
</code></pre>
<p>In this function, we first parse the JSON body from the message’s payload. Then we extract the job’s JSON from this body, stored at the key <code>job</code>, and the scheduled time, stored at the key <code>time</code>.</p>
<p>Then we re-marshal the job’s JSON and unmarshal it into a <code>Job</code> value stored in <code>job *models.Job</code>.</p>
<p>If there is an error at any of these parsing steps, we acknowledge the message using <code>Ack(false)</code>. We pass <code>multiple</code> as false in <code>Ack</code> to avoid acknowledging any other prior deliveries.</p>
<p>We do this because we can’t process the job further and have to remove the malformed message from the queue by acknowledging it.</p>
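<p>The two-step decode (generic map first, then re-marshal the <code>job</code> sub-document into the struct) can be seen in isolation. The <code>Job</code> fields and the <code>decodeJob</code> helper below are assumptions based on this article’s usage, not the project’s actual model:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Job mirrors the fields used in this article; the real models.Job may differ.
type Job struct {
	Id       int    `json:"id"`
	CronExpr string `json:"cron_expr"`
	Type     string `json:"type"`
}

// decodeJob performs the same two-step decode as processMessage:
// unmarshal into a generic map, then re-marshal the "job" sub-document
// into the typed struct.
func decodeJob(raw []byte) (Job, string, error) {
	var body map[string]any
	if err := json.Unmarshal(raw, &body); err != nil {
		return Job{}, "", err
	}
	jobBody, _ := body["job"].(map[string]any)
	sTime, _ := body["time"].(string)

	b, err := json.Marshal(jobBody)
	if err != nil {
		return Job{}, "", err
	}
	var job Job
	err = json.Unmarshal(b, &job)
	return job, sTime, err
}

func main() {
	raw := []byte(`{"job":{"id":7,"cron_expr":"30 9 * * *","type":"ping"},"time":"2025-07-10T09:30:00.000000+00:00"}`)
	job, sTime, err := decodeJob(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(job.Id, job.Type, sTime) // 7 ping 2025-07-10T09:30:00.000000+00:00
}
```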
<pre><code class="lang-go"><span class="hljs-comment">// get retries of any existing job entry for this job id, scheduled time which was not failed</span>
<span class="hljs-keyword">var</span> jobEntry *models.JobEntry
j, err := dbService.getExistingJobEntry(job, sTime)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    log.Println(<span class="hljs-string">"error in getting a job entry: "</span>, err.Error())
    msg.Ack(<span class="hljs-literal">false</span>)
    <span class="hljs-keyword">return</span>
}
<span class="hljs-keyword">if</span> j != <span class="hljs-literal">nil</span> {
    <span class="hljs-comment">// if a job entry already exists, update that in jobEntry variable</span>
    jobEntry = j
} <span class="hljs-keyword">else</span> {
    createdEntry, err := dbService.createNewJobEntry(job, sTime)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Println(<span class="hljs-string">"error creating new entry in db: "</span>, err.Error())
        msg.Ack(<span class="hljs-literal">false</span>)
        <span class="hljs-keyword">return</span>
    }
    jobEntry = createdEntry
}
</code></pre>
<p>Then we call the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/dbService.go#L21"><code>getExistingJobEntry()</code></a> function from <code>dbService</code> to check whether there were any previous execution attempts for this job at that specific scheduled time. If there was a prior execution and this is a retry round, we initialize the <code>jobEntry</code> variable from it.</p>
<p>If there was no previous execution, we create a new job entry with <code>running</code> status for that scheduled time in the database using the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/dbService.go#L39"><code>createNewJobEntry()</code></a> function and store its value in the <code>jobEntry</code> variable.</p>
<pre><code class="lang-go"><span class="hljs-comment">// checking if the retry count reached max retries</span>
<span class="hljs-keyword">if</span> jobEntry.Retries &gt;= MaxJobRetries {
    dbService.markJobAsPermanentlyFailed(jobEntry)
    msg.Ack(<span class="hljs-literal">false</span>)
    <span class="hljs-keyword">return</span>
}
</code></pre>
<p>Then, for a retried job, we check whether the retry count has reached <code>MaxJobRetries</code>, i.e. 3.</p>
<p>If it has, we update that job’s entry for that scheduled time to the <code>permanently_failed</code> status in the database using the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/dbService.go#L69"><code>markJobAsPermanentlyFailed()</code></a> function, acknowledge the message to remove it from the queue, and return from the function.</p>
<pre><code class="lang-go"><span class="hljs-keyword">switch</span> job.Type {
<span class="hljs-keyword">case</span> <span class="hljs-string">"ping"</span>:
    <span class="hljs-keyword">if</span> err := processPingJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
        handleJobError(dbService, err, msg, jobEntry)
        <span class="hljs-keyword">return</span>
    }
<span class="hljs-keyword">case</span> <span class="hljs-string">"email"</span>:
    <span class="hljs-keyword">if</span> err := processEmailJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
        handleJobError(dbService, err, msg, jobEntry)
        <span class="hljs-keyword">return</span>
    }
<span class="hljs-keyword">case</span> <span class="hljs-string">"slack"</span>:
    <span class="hljs-keyword">if</span> err := processSlackJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
        handleJobError(dbService, err, msg, jobEntry)
        <span class="hljs-keyword">return</span>
    }
<span class="hljs-keyword">case</span> <span class="hljs-string">"webhook"</span>:
    <span class="hljs-keyword">if</span> err := processWebhookJob(dbService, job, jobEntry, sTime); err != <span class="hljs-literal">nil</span> {
        handleJobError(dbService, err, msg, jobEntry)
        <span class="hljs-keyword">return</span>
    }
<span class="hljs-keyword">default</span>:
    handleJobError(dbService, fmt.Errorf(<span class="hljs-string">"invalid event type: %s"</span>, job.Type), msg, jobEntry)
}
</code></pre>
<p>Then we switch on <code>job.Type</code> and call the matching task-processing function for that job type.</p>
<p>If a task-processing function returns an error, we call the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/rabbitmqService.go#L166"><code>handleJobError()</code></a> function and return.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleJobError</span><span class="hljs-params">(dbService *DBService, err error, msg *amqp091.Delivery, jobEntry *models.JobEntry)</span></span> {
    log.Println(<span class="hljs-string">"error in processing job: "</span>, err.Error(), <span class="hljs-string">", retries: "</span>, jobEntry.Retries)
    time.Sleep(<span class="hljs-number">2</span> * time.Second)
    dbService.markJobAsFailed(err, jobEntry.Retries+<span class="hljs-number">1</span>, jobEntry)
    msg.Nack(<span class="hljs-literal">false</span>, <span class="hljs-literal">true</span>)
}
</code></pre>
<p>In the <code>handleJobError()</code> function, we sleep for 2 seconds before retrying, as a simple fixed back-off.</p>
<p>We mark the job run entry as <code>failed</code> in database using <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/dbService.go#L75"><code>markJobAsFailed()</code></a> function. The job entry’s status will move to <code>permanently_failed</code> if it fails all 3 retries.</p>
<p>Then we negative-acknowledge the message with <code>Nack()</code>, passing <code>requeue</code> as true to put the message back into the queue.</p>
<p>On successful processing of the job inside a task-processing function, we use the <a target="_blank" href="https://github.com/Aniketyadav44/cronflow/blob/d19f24d5fbfbfb6c4f988722d609c8ba4ecd89ac/consumer/internal/services/dbService.go#L63"><code>markJobAsCompleted()</code></a> function from <code>dbService</code> to update the job entry status to <code>completed</code>.</p>
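<p>Putting the ack/nack rules together, the status transitions can be modeled as a small pure function. The <code>decide</code> helper below is illustrative only and is not part of the project’s API:</p>

```go
package main

import "fmt"

// MaxJobRetries matches the limit described in the article.
const MaxJobRetries = 3

// decide is a hypothetical model of the consumer's ack/nack rules: it returns
// the next job-entry status and whether the message should be requeued.
func decide(retries int, failed bool) (status string, requeue bool) {
	if retries >= MaxJobRetries {
		return "permanently_failed", false // ack and drop the message
	}
	if failed {
		return "failed", true // nack with requeue=true for another attempt
	}
	return "completed", false // ack on success
}

func main() {
	// A job that fails every attempt: three failed rounds, then permanently_failed.
	retries := 0
	for {
		status, requeue := decide(retries, true)
		fmt.Println(retries, status, requeue)
		if !requeue {
			break
		}
		retries++ // mirrors markJobAsFailed storing retries+1
	}
}
```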
<pre><code class="lang-go">msg.Ack(<span class="hljs-literal">false</span>)
</code></pre>
<p>Then finally, at the end of the <code>processMessage()</code> function, we <code>Ack()</code> the message.<br />And with that, the job has been fully processed after being consumed!</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this blog, we learned how to schedule cron jobs, publish them to a RabbitMQ queue, and consume those messages from the queue.</p>
<p>We also learned how, after consuming a message, we can acknowledge it and retry on job failures.</p>
<p>This blog covered parts of the full project I created, which also provides a dashboard for creating jobs, listing all jobs and viewing job run statuses as <code>running</code>/<code>completed</code>/<code>failed</code> (for retries)/<code>permanently_failed</code>.</p>
<p>Make sure to give it a go:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Aniketyadav44/cronflow/tree/main">https://github.com/Aniketyadav44/cronflow/tree/main</a></div>
<hr />
<p>If you find this article helpful, don't forget to hit the ❤️ button.</p>
<p>Check out my website <a target="_blank" href="https://anikety.com/"><strong>here</strong></a> and feel free to connect.</p>
<p>Happy Coding! 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Effective Project Structure for Backend Projects in Go]]></title><description><![CDATA[Introduction
This article provides an optimal and modular project structure for backend projects in Golang, which I use as a starting template for my projects to ensure they are scalable and well maintained.
Along with the structure, it also includes...]]></description><link>https://blog.anikety.com/go-backend-project-structure</link><guid isPermaLink="true">https://blog.anikety.com/go-backend-project-structure</guid><category><![CDATA[golang]]></category><category><![CDATA[Golang developer]]></category><category><![CDATA[project structure]]></category><category><![CDATA[backend]]></category><category><![CDATA[Databases]]></category><dc:creator><![CDATA[Aniket Yadav]]></dc:creator><pubDate>Sat, 07 Jun 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750661577245/bcaca2a7-3711-45ca-8ace-1c691106dd98.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This article provides an optimal and modular project structure for backend projects in Golang, which I use as a starting template for my projects to ensure they are scalable and well maintained.</p>
<p>Along with the structure, it also includes a starter codebase built using <code>gin</code> framework, which can be used as a template.</p>
<h2 id="heading-why-project-structure-matters">Why Project Structure Matters?</h2>
<p>Designing the structure of a project is important right from the beginning of development. As the project grows, so do the codebase and its complexity, which need to be distributed properly across well-thought-out folders and files.</p>
<p>A clear structure makes it easy for the developers to navigate through the code as it grows and also lets future developers focus on implementing new features rather than spending time searching for relevant code.</p>
<h2 id="heading-folder-structure">Folder Structure</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750654020754/aaf0d38c-365f-4f2d-a3a3-efe17b13c763.png" alt="folder structure" class="image--center mx-auto" /></p>
<h3 id="heading-api"><code>/api</code></h3>
<p>The <code>api</code> folder consists of sub-folders to maintain the versioning of API to manage changes without breaking existing clients using previous versions.</p>
<pre><code class="lang-plaintext">├── api/
│   └── v1/
│       └── routes.go 
│       └── userRoutes.go
</code></pre>
<p>The version sub-folders, named <code>v1</code>, <code>v2</code>, etc., contain a <code>routes.go</code> file that registers the different routes, which are split across files by entity/feature.</p>
<pre><code class="lang-go"><span class="hljs-comment">// routes.go</span>
<span class="hljs-keyword">package</span> v1

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"database/sql"</span>

    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">RegisterRoutes</span><span class="hljs-params">(router *gin.Engine, db *sql.DB)</span></span> {
    registerUserRoutes(router, db)
}
</code></pre>
<pre><code class="lang-go"><span class="hljs-comment">// userRoutes.go</span>
<span class="hljs-keyword">package</span> v1

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"database/sql"</span>

    <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/internal/handlers"</span>
    <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/internal/services"</span>
    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">registerUserRoutes</span><span class="hljs-params">(router *gin.Engine, db *sql.DB)</span></span> {
    userService := services.NewUserService(db)
    userHandler := handlers.NewUserHandler(userService)

    v1 := router.Group(<span class="hljs-string">"/api/v1"</span>)
    {
        v1.GET(<span class="hljs-string">"/users"</span>, userHandler.GetAllUsers)
    }
}
</code></pre>
<p>Here, we create a <code>service</code> and a <code>handler</code> instance by first injecting the database dependency into the service and then passing that service to the handler.</p>
<p>Then we define a route group named <code>v1</code> for the <code>/api/v1</code> path and register our endpoint methods on it using the respective handler functions.</p>
<h3 id="heading-cmd"><code>/cmd</code></h3>
<p>The <code>cmd</code> folder is the entry point of our backend application. It contains sub-folders depending on the type of application we are building.</p>
<p>e.g. <code>cmd/api</code> for our REST API, <code>cmd/grpc</code> for a gRPC service, or <code>cmd/cli</code> for a CLI application.</p>
<p>These sub-folders each have a <code>main.go</code> file, which is the exact entry point of the project.</p>
<pre><code class="lang-plaintext">├── cmd/
│   └── api/
│       └── main.go
</code></pre>
<pre><code class="lang-go"><span class="hljs-comment">// main.go</span>
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"log"</span>

    v1 <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/api/v1"</span>
    <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/internal/config"</span>
    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    cfg, err := config.LoadConfig()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"error in loading config: "</span>, err)
    }

    router := gin.Default()
    v1.RegisterRoutes(router, cfg.DB)

    <span class="hljs-keyword">if</span> err := router.Run(<span class="hljs-string">":"</span> + cfg.Port); err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"error in starting server: "</span>, err)
    }
}
</code></pre>
<p>Here, we first load our config instance, which provides the port and the database instance for this demo.</p>
<p>Next, we create a Gin router instance and register our API routes from the <code>v1</code> package in <code>api/v1</code> on it.</p>
<p>Finally, we start the server using <code>router.Run()</code>, binding it to the port we loaded from <code>.env</code> in the config.</p>
<h3 id="heading-internal"><code>/internal</code></h3>
<p>The <code>internal</code> folder contains the application-level logic, further divided into sub-folders for configuration, handlers (presentation), business logic, data models and middlewares.</p>
<pre><code class="lang-plaintext">├── internal/
│   ├── config/
│   │   └── config.go
│   │   └── db.go              
│   ├── handlers/
│   │   └── userHandler.go              
│   ├── services/
│   │   └── userService.go              
│   ├── models/
│   │   └── userModel.go               
│   └── middlewares/
│       └── authMiddleware.go
</code></pre>
<h3 id="heading-internalconfig"><code>/internal/config</code></h3>
<p>The <code>config</code> sub-folder of the <code>internal</code> folder contains the <code>config.go</code> file, which loads the environment variables and initializes our database instance.</p>
<pre><code class="lang-go"><span class="hljs-comment">// config.go</span>
<span class="hljs-keyword">package</span> config

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"database/sql"</span>
    <span class="hljs-string">"os"</span>

    <span class="hljs-string">"github.com/joho/godotenv"</span>
    _ <span class="hljs-string">"github.com/lib/pq"</span>
)

<span class="hljs-keyword">type</span> Config <span class="hljs-keyword">struct</span> {
    Port <span class="hljs-keyword">string</span>
    DB   *sql.DB
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">LoadConfig</span><span class="hljs-params">()</span> <span class="hljs-params">(*Config, error)</span></span> {
    <span class="hljs-keyword">if</span> err := godotenv.Load(); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    port := getEnvKey(<span class="hljs-string">"PORT"</span>, <span class="hljs-string">""</span>)
    dbHost := getEnvKey(<span class="hljs-string">"DB_HOST"</span>, <span class="hljs-string">""</span>)
    dbPort := getEnvKey(<span class="hljs-string">"DB_PORT"</span>, <span class="hljs-string">""</span>)
    dbUser := getEnvKey(<span class="hljs-string">"DB_USERNAME"</span>, <span class="hljs-string">""</span>)
    dbPassword := getEnvKey(<span class="hljs-string">"DB_PASSWORD"</span>, <span class="hljs-string">""</span>)
    dbName := getEnvKey(<span class="hljs-string">"DB_NAME"</span>, <span class="hljs-string">""</span>)

    db, err := loadDb(dbHost, dbPort, dbUser, dbPassword, dbName)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-keyword">return</span> &amp;Config{
        Port: port,
        DB:   db,
    }, <span class="hljs-literal">nil</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">getEnvKey</span><span class="hljs-params">(key, defaultValue <span class="hljs-keyword">string</span>)</span> <span class="hljs-title">string</span></span> {
    <span class="hljs-keyword">if</span> val, exists := os.LookupEnv(key); exists {
        <span class="hljs-keyword">return</span> val
    }
    <span class="hljs-keyword">return</span> defaultValue
}
</code></pre>
<p>In this, we have defined a <code>Config</code> struct which holds the application-level configuration. In this demo template, that is our database connection instance (<code>*sql.DB</code>) and the server port.</p>
<p>Then, there is a function <code>LoadConfig()</code> which returns the loaded <code>Config</code> along with any error. This function first loads the environment using the <code>godotenv</code> package and then opens the database connection using the <code>loadDb()</code> function defined in the <code>/config/db.go</code> file.</p>
<p>In this <code>db.go</code> file, we create our database connection instance using the <code>sql</code> package and verify the connection with <code>db.Ping()</code>.</p>
<pre><code class="lang-go"><span class="hljs-comment">// db.go</span>
<span class="hljs-keyword">package</span> config

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"database/sql"</span>
    <span class="hljs-string">"fmt"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">loadDb</span><span class="hljs-params">(dbHost, dbPort, dbUser, dbPassword, dbName <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(*sql.DB, error)</span></span> {
    psConnStr := fmt.Sprintf(<span class="hljs-string">"postgres://%s:%s@%s:%s/%s?sslmode=disable"</span>, dbUser, dbPassword, dbHost, dbPort, dbName)

    db, err := sql.Open(<span class="hljs-string">"postgres"</span>, psConnStr)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-keyword">if</span> err := db.Ping(); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-keyword">return</span> db, <span class="hljs-literal">nil</span>
}
</code></pre>
<p>Just like <code>db.go</code>, this folder can hold the configuration of other databases/services such as <code>redis</code>, <code>kafka</code>, etc.</p>
<h3 id="heading-internalhandlers"><code>/internal/handlers</code></h3>
<p>This folder contains the HTTP handlers, which define how requests are processed.</p>
<pre><code class="lang-go"><span class="hljs-comment">// userHandler.go</span>
<span class="hljs-keyword">package</span> handlers

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"net/http"</span>

    <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/internal/services"</span>
    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
)

<span class="hljs-keyword">type</span> UserHandler <span class="hljs-keyword">struct</span> {
    service *services.UserService
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">NewUserHandler</span><span class="hljs-params">(service *services.UserService)</span> *<span class="hljs-title">UserHandler</span></span> {
    <span class="hljs-keyword">return</span> &amp;UserHandler{
        service: service,
    }
}

<span class="hljs-comment">// ... handler functions defined here</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(h *UserHandler)</span> <span class="hljs-title">GetAllUsers</span><span class="hljs-params">(c *gin.Context)</span></span> {
    users, err := h.service.GetAllUsers()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        c.JSON(http.StatusInternalServerError, gin.H{<span class="hljs-string">"error"</span>: err.Error()})
        <span class="hljs-keyword">return</span>
    }

    c.JSON(http.StatusOK, gin.H{<span class="hljs-string">"users"</span>: users})
}
</code></pre>
<p>Here, we have defined a <code>UserHandler</code> struct which holds a reference to its <code>service</code>. This service, with the db connection injected, holds all of the business logic behind the handler functions called on specific routes.</p>
<p>Then, a constructor function <code>NewUserHandler()</code> is defined which initializes and returns a new handler with the service dependency injected.</p>
<h3 id="heading-internalservices"><code>/internal/services</code></h3>
<p>This folder contains the services, which hold all of the business logic behind the handlers.</p>
<pre><code class="lang-go"><span class="hljs-comment">// userService.go</span>
<span class="hljs-keyword">package</span> services

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"database/sql"</span>

    <span class="hljs-string">"github.com/Aniketyadav44/go-backend-template/internal/models"</span>
)

<span class="hljs-keyword">type</span> UserService <span class="hljs-keyword">struct</span> {
    db *sql.DB
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">NewUserService</span><span class="hljs-params">(db *sql.DB)</span> *<span class="hljs-title">UserService</span></span> {
    <span class="hljs-keyword">return</span> &amp;UserService{
        db: db,
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *UserService)</span> <span class="hljs-title">GetAllUsers</span><span class="hljs-params">()</span> <span class="hljs-params">([]models.User, error)</span></span> {
    <span class="hljs-comment">// ... get users logic from database connection -&gt; s.db</span>
}
</code></pre>
<p>Here, we have defined a <code>UserService</code> struct that holds a reference to the database connection (<code>*sql.DB</code>).</p>
<p>Then, there is a constructor function <code>NewUserService()</code> which initializes and returns a new instance of the service with the database connection injected.</p>
<h3 id="heading-internalmodels"><code>/internal/models</code></h3>
<p>This folder contains the different data structures used across our application, including the structs for the request bodies of our POST APIs.</p>
<pre><code class="lang-go"><span class="hljs-comment">// userModel.go example</span>
<span class="hljs-keyword">package</span> models

<span class="hljs-keyword">type</span> User <span class="hljs-keyword">struct</span> {
    ID        <span class="hljs-keyword">string</span>    <span class="hljs-string">`json:"id"`</span>
    Username  <span class="hljs-keyword">string</span>    <span class="hljs-string">`json:"username"`</span>
    Email     <span class="hljs-keyword">string</span>    <span class="hljs-string">`json:"email"`</span>
}
</code></pre>
<h3 id="heading-internalmiddlewares"><code>/internal/middlewares</code></h3>
<p>This folder contains the different middlewares used by our APIs, split into separate files.</p>
<pre><code class="lang-go"><span class="hljs-comment">// authMiddleware.go</span>
<span class="hljs-keyword">package</span> middlewares

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"net/http"</span>

    <span class="hljs-string">"github.com/gin-gonic/gin"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">AuthMiddleware</span><span class="hljs-params">()</span> <span class="hljs-title">gin</span>.<span class="hljs-title">HandlerFunc</span></span> {
    <span class="hljs-keyword">return</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *gin.Context)</span></span> {
        <span class="hljs-comment">// ...logic</span>
        c.Next()
    }
}
</code></pre>
<h3 id="heading-pkg"><code>/pkg</code></h3>
<p>This folder contains sub-folders such as <code>responses</code>, <code>utils</code>, etc., which hold the reusable code imported across our application.</p>
<p>For example:</p>
<ul>
<li><p><code>/pkg/utils/utils.go</code> can hold common utility functions</p>
</li>
<li><p><code>/pkg/utils/crypto.go</code> can hold the encryption, decryption, hashing, etc. functions</p>
</li>
<li><p><code>/pkg/responses/errorResponses.go</code> can hold the different error responses to be sent in error conditions.</p>
</li>
</ul>
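<p>As a concrete example of what could live in <code>/pkg/utils/crypto.go</code> (the function name here is an assumption, not part of the template), a small stateless hashing helper:</p>

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// HashSHA256 is the kind of small, stateless helper suited to /pkg/utils:
// reusable across the app and trivial to unit-test.
func HashSHA256(s string) string {
	sum := sha256.Sum256([]byte(s))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(HashSHA256("hello"))
}
```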
<h2 id="heading-conclusion">Conclusion</h2>
<p>Designing a clean and modular structure is very important for a scalable and maintainable codebase. By organizing code files into logical layers, we make our project easier to understand and extend.</p>
<p>This <code>gin</code>-based template can be used to quickly get started with a new project.</p>
<hr />
<p>This template can be found here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Aniketyadav44/go-backend-template">https://github.com/Aniketyadav44/go-backend-template</a></div>
<p> </p>
<p>If you find this article helpful, don't forget to hit the ❤️ button.</p>
<p>Check out my website <a target="_blank" href="https://anikety.com"><strong>here</strong></a> and feel free to connect.</p>
<p>Happy Coding! 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Worker Pools in Go]]></title><description><![CDATA[Introduction
Golang has made it very simple to apply concurrency in our programs and the worker pool is one of the amazing concurrency patterns that we can simply create with Go.
Definition
The technical definition is: (Yes, I copied this from internet 🙂...]]></description><link>https://blog.anikety.com/understanding-worker-pools-in-go</link><guid isPermaLink="true">https://blog.anikety.com/understanding-worker-pools-in-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[golang]]></category><category><![CDATA[Golang developer]]></category><category><![CDATA[Worker Thread]]></category><category><![CDATA[worker-pool-pattern-in-go]]></category><dc:creator><![CDATA[Aniket Yadav]]></dc:creator><pubDate>Sat, 24 May 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750755803314/d96e7e4a-2e1f-4464-88ff-e4e97b29312a.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Golang makes it very simple to apply concurrency in our programs, and the worker pool is one of the great concurrency patterns we can easily build with Go.</p>
<h2 id="heading-definition">Definition</h2>
<p>The technical definition is: (Yes, I copied this from internet 🙂)</p>
<blockquote>
<p>A worker pool is a <strong>software design pattern where a set of worker threads or processes (the "pool") are created to concurrently execute tasks from a queue</strong></p>
</blockquote>
<p>In Go terms, a worker pool lets us run multiple tasks (jobs) concurrently using a fixed number of goroutines (workers) that pull tasks from a shared channel (acting as a queue), without creating a new goroutine for each task. Tasks thus run efficiently with less resource usage.</p>
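<p>Stripped to its essentials, with trivial squaring jobs standing in for real work, the pattern can be sketched as:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum squares each number using a fixed pool of workers
// pulling from a shared jobs channel, and returns the total.
func squareSum(nums []int, workers int) int {
	jobs := make(chan int, len(nums))    // shared task queue
	results := make(chan int, len(nums)) // collected outputs
	var wg sync.WaitGroup

	// A fixed number of workers: each may process several jobs,
	// instead of spawning one goroutine per job.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n
			}
		}()
	}

	for _, n := range nums {
		jobs <- n
	}
	close(jobs) // signal workers there are no more tasks
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum([]int{1, 2, 3, 4, 5}, 2)) // 1+4+9+16+25 = 55
}
```

<p>The same shape scales to any task type: swap the <code>int</code> jobs for a struct and the squaring for real work, exactly as in the HTTP examples that follow.</p>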
<h2 id="heading-why-worker-pools">Why Worker Pools?</h2>
<p>A worker pool makes it possible to run a huge number of tasks on limited hardware resources.</p>
<p>For example, to hit GET requests concurrently against 100 URLs, we can simply run a goroutine for each call. It's simple in Go:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"sync"</span>
    <span class="hljs-string">"time"</span>
)

<span class="hljs-keyword">type</span> Result <span class="hljs-keyword">struct</span> {
    Response *http.Response
    Error    error
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">request</span><span class="hljs-params">(client *http.Client, url <span class="hljs-keyword">string</span>, results <span class="hljs-keyword">chan</span>&lt;- Result, wg *sync.WaitGroup)</span></span> {
    <span class="hljs-keyword">defer</span> wg.Done()
    res, err := client.Get(url)
    results &lt;- Result{res, err}
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    urls := []<span class="hljs-keyword">string</span>{
        <span class="hljs-comment">// 100 urls from https://gist.github.com/demersdesigns/4442cd84c1cc6c5ccda9b19eac1ba52b</span>
    }
    results := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> Result, <span class="hljs-built_in">len</span>(urls))
    wg := sync.WaitGroup{}
    client := http.Client{
        Timeout: <span class="hljs-number">5</span> * time.Second,
    }

    wg.Add(<span class="hljs-built_in">len</span>(urls))
    <span class="hljs-comment">// starting goroutine for each url</span>
    <span class="hljs-keyword">for</span> _, v := <span class="hljs-keyword">range</span> urls {
        <span class="hljs-keyword">go</span> request(&amp;client, v, results, &amp;wg)
    }

    <span class="hljs-comment">// collecting and printing results</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-built_in">len</span>(urls); i++ {
        res := &lt;-results
        <span class="hljs-keyword">if</span> res.Error != <span class="hljs-literal">nil</span> {
            fmt.Printf(<span class="hljs-string">"error in request: %s\n"</span>, res.Error.Error())
        } <span class="hljs-keyword">else</span> {
            fmt.Printf(<span class="hljs-string">"request success for %s\n"</span>, res.Response.Request.URL)
        }
    }
    wg.Wait()
}
</code></pre>
<p>Here, we create a <code>urls</code> list holding 100 URL strings. We then make a channel of the <code>Result</code> struct, which holds the response and the error, a WaitGroup <code>wg</code> to wait for the goroutines to complete, and an HTTP client with a 5-second timeout. Finally, we start 100 goroutines, one per URL, in a range loop over the <code>request</code> function.</p>
<p>Inside this function, we first defer <code>wg.Done()</code> and make the GET request. The <code>Result</code> is then sent to the <code>results</code> channel.</p>
<p>Simultaneously, the main goroutine loops <code>len(urls)</code> times, receives each result from the <code>results</code> channel and prints it.</p>
<p>This program looks fast and is very simple, but what if we want to do the same for, say, 1,000, 10,000 or even 100,000 URLs? Spawning a goroutine per task can easily max out the system resources on a huge dataset at scale.</p>
<p>It can be improved, and this is where worker pools help!</p>
<h2 id="heading-worker-pool">Worker Pool</h2>
<p>A worker pool lets us run a fixed number of goroutines, where each goroutine can serve multiple GET URLs pulled from the shared queue channel.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"sync"</span>
    <span class="hljs-string">"time"</span>
)

<span class="hljs-keyword">type</span> Job <span class="hljs-keyword">struct</span> {
    URL <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> JobResult <span class="hljs-keyword">struct</span> {
    Response *http.Response
    Error    error
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">workers</span><span class="hljs-params">(client *http.Client, jobs &lt;-<span class="hljs-keyword">chan</span> Job, results <span class="hljs-keyword">chan</span>&lt;- JobResult, wg *sync.WaitGroup)</span></span> {
    <span class="hljs-keyword">for</span> job := <span class="hljs-keyword">range</span> jobs {
        res, err := client.Get(job.URL)
        results &lt;- JobResult{res, err}
        wg.Done()
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    urls := []<span class="hljs-keyword">string</span>{
        <span class="hljs-comment">// 100 urls from https://gist.github.com/demersdesigns/4442cd84c1cc6c5ccda9b19eac1ba52b</span>
    }
    numWorkers := <span class="hljs-number">10</span>
    client := http.Client{
        Timeout: <span class="hljs-number">5</span> * time.Second,
    }

    jobs := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> Job, <span class="hljs-built_in">len</span>(urls))
    results := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> JobResult, <span class="hljs-built_in">len</span>(urls))
    wg := sync.WaitGroup{}

    <span class="hljs-comment">// starting a fixed number of worker goroutines</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; numWorkers; i++ {
        <span class="hljs-keyword">go</span> workers(&amp;client, jobs, results, &amp;wg)
    }

    wg.Add(<span class="hljs-built_in">len</span>(urls))
    <span class="hljs-comment">// sending jobs to the job queue</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-built_in">len</span>(urls); i++ {
        jobs &lt;- Job{urls[i]}
    }
    <span class="hljs-built_in">close</span>(jobs)

    <span class="hljs-comment">// collecting and printing results</span>
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-built_in">len</span>(urls); i++ {
        res := &lt;-results
        <span class="hljs-keyword">if</span> res.Error != <span class="hljs-literal">nil</span> {
            fmt.Println(<span class="hljs-string">"error: "</span>, res.Error.Error())
        } <span class="hljs-keyword">else</span> {
            fmt.Println(res.Response.Request.URL)
        }
    }
    wg.Wait()
}
</code></pre>
<p>In this version, we create a new struct <code>Job</code> to represent a task; it holds the URL.</p>
<p>We use a buffered channel <code>jobs</code> of <code>Job</code> with capacity <code>len(urls)</code>, while <code>numWorkers</code> specifies how many workers we want to deploy. This channel is the shared queue from which the worker goroutines pick their tasks.</p>
<p>We then start exactly <code>numWorkers</code> worker goroutines running the <code>workers</code> function. Inside it, each worker ranges over the <code>jobs</code> channel to pick up tasks; for each task it makes the GET call, sends a <code>JobResult</code> to the <code>results</code> channel and calls <code>wg.Done()</code>.</p>
<p>Then we add the counter to our WaitGroup and push jobs onto the <code>jobs</code> channel by looping over the <code>urls</code> list. We close the channel with <code>close(jobs)</code> to signal the workers that there are no more tasks, so they can exit their range loops.</p>
<p>After this, we receive the results from the <code>results</code> channel in a loop and print them to the console.</p>
<p>As the workers keep picking tasks off the shared channel, it steadily drains until every task has been picked up and completed. While worker1 handles task1 and worker2 handles task2, worker1 can move on to task3; each worker thus ends up executing multiple tasks.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>So, instead of executing one task per goroutine, we can execute multiple tasks from a single goroutine. This is the main purpose of using a worker pool!</p>
<p>In the given examples, we run a total of 10 worker goroutines to make 100 GET calls using a worker pool, which is much more efficient than the previous example that spins up 100 goroutines for the same data.</p>
<hr />
<p>If you find this article helpful, don't forget to hit the ❤️ button.</p>
<p>Check out my website <a target="_blank" href="https://anikety.com/"><strong>here</strong></a> and feel free to connect.</p>
<p>Happy Coding! 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Very basic Gorilla Mux tutorial in Go]]></title><description><![CDATA[Introduction
Hello DEV fam, today I will be showing you how to create a basic REST API in Go language using Gorilla mux package with very simple steps, so even if your are very new to this amazing language you can follow this tutorial without any hes...]]></description><link>https://blog.anikety.com/very-basic-gorilla-mux-tutorial-in-go</link><guid isPermaLink="true">https://blog.anikety.com/very-basic-gorilla-mux-tutorial-in-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[golang]]></category><category><![CDATA[APIs]]></category><category><![CDATA[API basics ]]></category><category><![CDATA[api]]></category><dc:creator><![CDATA[Aniket Yadav]]></dc:creator><pubDate>Tue, 09 Aug 2022 06:41:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1660027033613/mZI5_D0o3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Hello DEV fam, today I will show you how to create a basic REST API in the Go language using the <a target="_blank" href="https://github.com/gorilla/mux">Gorilla Mux</a> package in very simple steps, so even if you are very new to this amazing language you can follow this tutorial without any hesitation. So now let's get right into it!  </p>
<h2 id="heading-what-are-we-doing-today">What are we doing today?</h2>
<p>We will be creating a very, very basic REST API on our localhost to serve various <code>GET</code>, <code>POST</code>, <code>PUT</code> and <code>DELETE</code> requests.<br />As this is going to be a beginner's tutorial to the Gorilla Mux package, we will not be looking into any database queries. We will use a sample data collection and treat it as data coming from a database.<br />So without any delays, let's get started.  </p>
<h2 id="heading-any-pre-requisites">Any Pre-Requisites?</h2>
<p>But <em>wait wait wait</em>...  You may ask: what should I know in advance to follow this tutorial?<br />Hmm... Nothing fancy, just the basics of the Go language, and that's it!<br />Everything else about using and designing a basic CRUD REST API, I will explain along the way. </p>
<h2 id="heading-lets-see-what-we-will-have-in-the-end">Let's see, what we will have in the end</h2>
<p>We are going to create a simple application where we will be:<br /><strong>Getting</strong> the list of users in response to a valid <code>GET</code> request at <em>/users</em><br /><strong>Getting</strong> a user in response to a valid <code>GET</code> request at <em>/user/{id}</em><br /><strong>Creating</strong> a user in response to a valid <code>POST</code> request at <em>/user</em><br /><strong>Updating</strong> a user in response to a valid <code>PUT</code> request at <em>/user/{id}</em><br /><strong>Deleting</strong> a user in response to a valid <code>DELETE</code> request at <em>/user/{id}</em>  </p>
<h2 id="heading-the-first-step">The first step</h2>
<p>Initialize your project by creating a folder where we will write our code.
Begin with the following commands in your terminal:<br />the first creates the folder and the second moves into it</p>
<pre><code>mkdir go<span class="hljs-operator">-</span>mux<span class="hljs-operator">-</span>tut
cd go<span class="hljs-operator">-</span>mux<span class="hljs-operator">-</span>tut
</code></pre><p>Now initialize the Go module using your GitHub repository path</p>
<pre><code>go mod init github.com/user_name<span class="hljs-operator">/</span>repo_name
</code></pre><p>Now it's time to fetch the required Gorilla Mux module into our project using the following command.<br />The <code>mux</code> package provides the Gorilla Mux router (also known as an "HTTP request multiplexer"):</p>
<pre><code>go get <span class="hljs-operator">-</span>u github.com/gorilla<span class="hljs-operator">/</span>mux
</code></pre><h4 id="heading-now-designing-the-project-structure">Now designing the Project structure</h4>
<p>Create <code>main.go</code>, <code>app.go</code>, <code>model.go</code> and <code>controller.go</code> in the project folder.<br />Our project structure now looks something like this:  </p>
<p>-app.go<br />-controller.go<br />-model.go<br />-main.go<br />-go.mod<br />-go.sum  </p>
<p>Now the basic setup of project is completed, let's get started with coding!  </p>
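<p>To make the later steps concrete, <code>model.go</code> could hold an in-memory sample collection along these lines (the struct fields and values here are illustrative assumptions, not the tutorial's exact code):</p>

```go
package main

import "fmt"

// User is an assumed sample model; a slice of Users stands in for a database.
type User struct {
	ID   string
	Name string
}

// Users is our in-memory "table" of sample data.
var Users = []User{
	{ID: "1", Name: "Alice"},
	{ID: "2", Name: "Bob"},
}

// findUser looks a user up by id, as a GET /user/{id} controller would.
func findUser(id string) (User, bool) {
	for _, u := range Users {
		if u.ID == id {
			return u, true
		}
	}
	return User{}, false
}

func main() {
	if u, ok := findUser("2"); ok {
		fmt.Println(u.Name)
	}
}
```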
<h2 id="heading-creating-a-router">Creating a router</h2>
<p>First starting with <code>app.go</code> file, create a new router variable <code>Router</code> using <code>mux.NewRouter()</code> from the mux package, for this <code>import "github.com/gorilla/mux"</code></p>
<pre><code><span class="hljs-keyword">var</span> Router <span class="hljs-operator">=</span> mux.NewRouter()
</code></pre><p>Now let's create a function <code>HandleRoutes()</code> to handle different routes serving on this <code>Router</code></p>
<pre><code>func HandleRoutes() {
    Router.HandleFunc(<span class="hljs-string">"/users"</span>, GetAllUsers).Methods(<span class="hljs-string">"GET"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, GetUser).Methods(<span class="hljs-string">"GET"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user"</span>, CreateUser).Methods(<span class="hljs-string">"POST"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, UpdateUser).Methods(<span class="hljs-string">"PUT"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, DeleteUser).Methods(<span class="hljs-string">"DELETE"</span>)
}
</code></pre><p>Here, using the <code>Router</code> from the mux package, we register routes with the <code>HandleFunc()</code> method, which takes a string path and a handler function with the signature <code>func(http.ResponseWriter, *http.Request)</code>; we call these handlers <code>controllers</code>.<br />We defined:  </p>
<ul>
<li><em>/users</em> path, assigned the <code>GetAllUsers</code> controller with method <code>GET</code>, to get all the users  </li>
<li><em>/user/{id}</em> path, assigned the <code>GetUser</code> controller with method <code>GET</code>, to get the user with the specified <strong>id</strong> passed in the path </li>
<li><em>/user</em> path, assigned the <code>CreateUser</code> controller with method <code>POST</code>, to create a new user; the new user's data is passed in the request body</li>
<li><em>/user/{id}</em> path, assigned the <code>UpdateUser</code> controller with method <code>PUT</code>, to update an existing user; the updated data is passed in the request body</li>
<li><em>/user/{id}</em> path, assigned the <code>DeleteUser</code> controller with method <code>DELETE</code>, to delete a user  </li>
</ul>
<h4 id="heading-the-final-appgo-file-looks-like-this">The final <code>app.go</code> file looks like this</h4>
<pre><code><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"github.com/gorilla/mux"</span>
)

<span class="hljs-keyword">var</span> Router = mux.NewRouter()

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">HandleRoutes</span><span class="hljs-params">()</span></span> {
    Router.HandleFunc(<span class="hljs-string">"/users"</span>, GetAllUsers).Methods(<span class="hljs-string">"GET"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, GetUser).Methods(<span class="hljs-string">"GET"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user"</span>, CreateUser).Methods(<span class="hljs-string">"POST"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, UpdateUser).Methods(<span class="hljs-string">"PUT"</span>)
    Router.HandleFunc(<span class="hljs-string">"/user/{id}"</span>, DeleteUser).Methods(<span class="hljs-string">"DELETE"</span>)
}
</code></pre><h2 id="heading-creating-data-model">Creating data model</h2>
<p>To handle requests and serve responses, we first need some data.<br />Let's start by creating the <code>User</code> struct:</p>
<pre><code><span class="hljs-keyword">type</span> <span class="hljs-keyword">User</span> struct {
    ID        <span class="hljs-type">int</span>    `<span class="hljs-type">json</span>:"id"`
    FirstName string `<span class="hljs-type">json</span>:"firstName"`
    LastName  string `<span class="hljs-type">json</span>:"lastName"`
}
</code></pre><p>This <code>User</code> struct has an <code>ID</code> of type int with its JSON name <strong>id</strong>, a <code>FirstName</code> of type string with its JSON name <strong>firstName</strong>, and a <code>LastName</code> of type string with its JSON name <strong>lastName</strong>.  </p>
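<p>To see what these tags do, here is a small standalone sketch (separate from the tutorial files) that marshals a <code>User</code> to JSON; the keys in the output come from the tags, not from the Go field names:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// User mirrors the struct from model.go; the json tags control the
// field names that appear in the encoded output.
type User struct {
	ID        int    `json:"id"`
	FirstName string `json:"firstName"`
	LastName  string `json:"lastName"`
}

// encodeUser marshals a User to its JSON representation.
func encodeUser(u User) string {
	b, err := json.Marshal(u)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(encodeUser(User{ID: 1, FirstName: "Stephanie", LastName: "Turner"}))
	// Output: {"id":1,"firstName":"Stephanie","lastName":"Turner"}
}
```

<p>Without the tags, the encoded keys would be the exported field names <code>ID</code>, <code>FirstName</code> and <code>LastName</code>.</p>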
<p>Now we will create a sample slice of users to work with  </p>
<pre><code>var UsersData = []<span class="hljs-keyword">User</span>{
    <span class="hljs-keyword">User</span>{<span class="hljs-number">1</span>, "Stephanie", "Turner"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">2</span>, "Anna", "Edmunds"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">3</span>, "Jan", "Vaughan"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">4</span>, "Grace", "North"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">5</span>, "Piers", "Morrison"},
}
</code></pre><h4 id="heading-the-final-modelgo-file-looks-like-this">The final <code>model.go</code> file looks like this</h4>
<pre><code>package main

<span class="hljs-keyword">type</span> <span class="hljs-keyword">User</span> struct {
    ID        <span class="hljs-type">int</span>    `<span class="hljs-type">json</span>:"id"`
    FirstName string `<span class="hljs-type">json</span>:"firstName"`
    LastName  string `<span class="hljs-type">json</span>:"lastName"`
}

var UsersData = []<span class="hljs-keyword">User</span>{
    <span class="hljs-keyword">User</span>{<span class="hljs-number">1</span>, "Stephanie", "Turner"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">2</span>, "Anna", "Edmunds"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">3</span>, "Jan", "Vaughan"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">4</span>, "Grace", "North"},
    <span class="hljs-keyword">User</span>{<span class="hljs-number">5</span>, "Piers", "Morrison"},
}
</code></pre><h2 id="heading-creating-the-controllers">Creating the controllers</h2>
<p>Now comes the important part: creating the controllers, the functions responsible for serving the different paths of our simple API.  </p>
<p>We first need to import some packages: <code>encoding/json</code>, <code>math/rand</code>, <code>net/http</code>, <code>strconv</code>, and <code>github.com/gorilla/mux</code>.<br />The usage of each package is explained below.  </p>
<pre><code><span class="hljs-keyword">import</span> (
    <span class="hljs-string">"encoding/json"</span>
    <span class="hljs-string">"math/rand"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"strconv"</span>

    <span class="hljs-string">"github.com/gorilla/mux"</span>
)
</code></pre><h4 id="heading-starting-with-the-simplest-getallusers-function-to-return-all-users">Starting with the simplest, <code>GetAllUsers</code> Function to return all users</h4>
<pre><code>func GetAllUsers(w http.ResponseWriter, r <span class="hljs-operator">*</span>http.Request) {
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
    json.NewEncoder(w).Encode(UsersData)
}
</code></pre><p>Let's understand each component of this function.<br />It has two parameters: <code>w http.ResponseWriter</code>, whose <code>Write</code> method accepts a byte slice and writes it to the connection as part of the HTTP response, and <code>r *http.Request</code>, which carries the request data.<br />We declare the type of data we are sending by setting the <code>Content-Type</code> header of the response to <code>application/json</code>.<br />Finally, we send the response by encoding the data to JSON.  </p>
<h4 id="heading-now-creating-getuser-function-to-send-an-user-with-specific-id">Now creating <code>GetUser</code> Function to send a user with a specific id</h4>
<pre><code>func GetUser(w http.ResponseWriter, r <span class="hljs-operator">*</span>http.Request) {
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    vars :<span class="hljs-operator">=</span> mux.Vars(r)
    paramId, <span class="hljs-keyword">_</span> :<span class="hljs-operator">=</span> strconv.Atoi(vars[<span class="hljs-string">"id"</span>])

    <span class="hljs-keyword">for</span> <span class="hljs-keyword">_</span>, user :<span class="hljs-operator">=</span> range UsersData {
        <span class="hljs-keyword">if</span> user.ID <span class="hljs-operator">=</span><span class="hljs-operator">=</span> paramId {
            json.NewEncoder(w).Encode(user)
            <span class="hljs-keyword">return</span>
        }
    }
}
</code></pre><p>Here we first get the <code>map[string]string</code> of path parameters from the request <code>r</code> using <code>mux.Vars(r)</code>.
Since we need the id as an int, we read <code>id</code> from the <code>vars</code> map and convert it with the <code>strconv</code> package (the error from <code>strconv.Atoi</code> is ignored here for brevity; a real API should return a 400 for a non-numeric id).<br />Then we loop over <code>UsersData</code>, find the user with this id, and return it in the response.  </p>
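<p>One gap worth noting: if no user matches, the loop simply ends and the handler writes nothing, so the client gets an empty 200 response. A common improvement is to fall through to a 404. Here is a self-contained sketch of that idea (the <code>getUserByID</code> and <code>statusFor</code> helpers are hypothetical, not part of the tutorial code):</p>

```go
package main

import (
	"fmt"
	"net/http"
)

type User struct {
	ID        int
	FirstName string
	LastName  string
}

var UsersData = []User{{1, "Stephanie", "Turner"}, {2, "Anna", "Edmunds"}}

// getUserByID returns the matching user and true, or the zero User and false.
func getUserByID(users []User, id int) (User, bool) {
	for _, u := range users {
		if u.ID == id {
			return u, true
		}
	}
	return User{}, false
}

// statusFor maps the lookup result to the HTTP status a handler should write.
func statusFor(found bool) int {
	if found {
		return http.StatusOK
	}
	return http.StatusNotFound
}

func main() {
	if u, ok := getUserByID(UsersData, 2); ok {
		fmt.Println(u.FirstName) // Anna
	}
	_, ok := getUserByID(UsersData, 42)
	fmt.Println(statusFor(ok)) // 404
}
```

<p>Inside the real handler, the not-found branch would become <code>http.Error(w, "user not found", http.StatusNotFound)</code> followed by a <code>return</code>.</p>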
<h4 id="heading-working-with-createuser-function-to-create-a-new-user">Working with <code>CreateUser</code> Function to create a new user</h4>
<pre><code>func CreateUser(w http.ResponseWriter, r <span class="hljs-operator">*</span>http.Request) {
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    <span class="hljs-keyword">var</span> user User
    user.ID <span class="hljs-operator">=</span> rand.Intn(<span class="hljs-number">10000</span>)

    json.NewDecoder(r.Body).Decode(<span class="hljs-operator">&amp;</span>user)
    UsersData <span class="hljs-operator">=</span> append(UsersData, user)

    <span class="hljs-built_in">msg</span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User created successfully"</span>
    json.NewEncoder(w).Encode(ResponseMsg{<span class="hljs-built_in">msg</span>, user})
}
</code></pre><p>Here we first create a new user and assign it a random id in the range [0, 10000) using the <code>math/rand</code> package.<br />Then we decode the JSON data from the request body into our <code>User</code> struct, storing it in the <code>user</code> variable, and finally append the received user to our <code>UsersData</code> slice.
To send the response along with a message, we create another struct, <code>ResponseMsg</code>, which holds a message and the user data:</p>
<pre><code><span class="hljs-keyword">type</span> ResponseMsg struct {
    Msg  string `<span class="hljs-type">json</span>:"message"`
    <span class="hljs-keyword">User</span> <span class="hljs-keyword">User</span>   `<span class="hljs-type">json</span>:"user"`
}
</code></pre><p>Then we send this response with a message of <em>"User created successfully"</em> and created user's data</p>
<h4 id="heading-deleting-user-with-deleteuser-function">Deleting user with <code>DeleteUser</code> Function</h4>
<pre><code>func DeleteUser(w http.ResponseWriter, r <span class="hljs-operator">*</span>http.Request) {
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    vars :<span class="hljs-operator">=</span> mux.Vars(r)
    paramId, <span class="hljs-keyword">_</span> :<span class="hljs-operator">=</span> strconv.Atoi(vars[<span class="hljs-string">"id"</span>])

    <span class="hljs-keyword">for</span> index, user :<span class="hljs-operator">=</span> range UsersData {
        <span class="hljs-keyword">if</span> user.ID <span class="hljs-operator">=</span><span class="hljs-operator">=</span> paramId {
            UsersData <span class="hljs-operator">=</span> append(UsersData[:index], UsersData[index<span class="hljs-operator">+</span><span class="hljs-number">1</span>:]...)
        }
    }

    <span class="hljs-built_in">msg</span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User deleted successfully"</span>
    json.NewEncoder(w).Encode(<span class="hljs-built_in">msg</span>)
}
</code></pre><p>Here we set the header's <code>Content-Type</code> and read the id from the request path as before.<br />We then loop over the users to find the required user by comparing each user's id with the passed id.<br />Once found, we remove, i.e. delete, this user from the <code>UsersData</code> slice.<br />To do this we re-slice with the <code>append()</code> function: take the original slice from index 0 up to (but excluding) the user's index, then append the rest of the slice from the user's index + 1 onward.<br />Finally we send a response with the message <em>"User deleted successfully"</em>.</p>
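<p>The append-based removal is easier to see in isolation. A minimal sketch on an int slice (the <code>removeAt</code> helper is just for illustration):</p>

```go
package main

import "fmt"

// removeAt drops the element at index i by appending everything after i
// onto the slice that ends just before i. Later elements shift left and
// the slice shrinks by one.
func removeAt(s []int, i int) []int {
	return append(s[:i], s[i+1:]...)
}

func main() {
	ids := []int{1, 2, 3, 4, 5}
	ids = removeAt(ids, 2) // remove the value 3 at index 2
	fmt.Println(ids)       // [1 2 4 5]
}
```

<p>One caveat: the handler keeps ranging over <code>UsersData</code> after mutating it, and <code>range</code> works on the slice header it captured at the start, so the final iteration can revisit a stale element. That is harmless here because ids are unique, but adding a <code>break</code> right after the removal is safer.</p>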
<h4 id="heading-working-with-updateuser-function-to-update-an-user">Working with <code>UpdateUser</code> Function, to update a user</h4>
<pre><code>func UpdateUser(w http.ResponseWriter, r <span class="hljs-operator">*</span>http.Request) {
    w.Header().Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    vars :<span class="hljs-operator">=</span> mux.Vars(r)
    paramId, <span class="hljs-keyword">_</span> :<span class="hljs-operator">=</span> strconv.Atoi(vars[<span class="hljs-string">"id"</span>])

    <span class="hljs-keyword">for</span> index, user :<span class="hljs-operator">=</span> range UsersData {
        <span class="hljs-keyword">if</span> user.ID <span class="hljs-operator">=</span><span class="hljs-operator">=</span> paramId {
            UsersData <span class="hljs-operator">=</span> append(UsersData[:index], UsersData[index<span class="hljs-operator">+</span><span class="hljs-number">1</span>:]...)
        }
    }

    <span class="hljs-keyword">var</span> user User
    user.ID <span class="hljs-operator">=</span> paramId
    json.NewDecoder(r.Body).Decode(<span class="hljs-operator">&amp;</span>user)
    UsersData <span class="hljs-operator">=</span> append(UsersData, user)

    <span class="hljs-built_in">msg</span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User updated successfully"</span>
    json.NewEncoder(w).Encode(ResponseMsg{<span class="hljs-built_in">msg</span>, user})
}
</code></pre><p>This one has slightly more complex logic; no worries, let's walk through it.<br />First we set the header and read the id from the request path as before.<br />Then we loop over <code>UsersData</code> to find the user with that id and delete it, reusing the logic from <code>DeleteUser</code>.<br />With the old user deleted, we create a new user with the same id, decode the request body into it, and append it to the <code>UsersData</code> slice.<br />In this manner the user is updated.<br />We then send a response with the message <em>"User updated successfully"</em> along with the updated user's data.  </p>
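<p>Delete-then-append works, but it moves the updated user to the end of the slice, so the order of <code>UsersData</code> changes on every update. An alternative sketch that decodes over the existing element in place and keeps the order (the <code>updateUser</code> helper is hypothetical, not from the tutorial):</p>

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type User struct {
	ID        int    `json:"id"`
	FirstName string `json:"firstName"`
	LastName  string `json:"lastName"`
}

// updateUser decodes body over the user with the matching id, keeping its
// position in the slice. Fields absent from the body keep their old values.
func updateUser(users []User, id int, body string) bool {
	for i := range users {
		if users[i].ID == id {
			if err := json.NewDecoder(strings.NewReader(body)).Decode(&users[i]); err != nil {
				return false
			}
			users[i].ID = id // the path parameter wins over any id in the body
			return true
		}
	}
	return false
}

func main() {
	users := []User{{1, "Stephanie", "Turner"}, {2, "Anna", "Edmunds"}}
	updateUser(users, 1, `{"firstName":"Steph"}`)
	fmt.Println(users[0]) // {1 Steph Turner}
	fmt.Println(users[1]) // {2 Anna Edmunds}
}
```

<p>A side effect worth knowing: because decoding merges into the existing struct, a partial body (only <code>firstName</code> here) leaves the other fields untouched, which is PATCH-like behavior rather than a full PUT replacement.</p>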
<h4 id="heading-the-final-controllergo-file-look-like-this">The final <code>controller.go</code> file looks like this</h4>
<pre><code>package main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"encoding/json"</span>
    <span class="hljs-string">"math/rand"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"strconv"</span>

    <span class="hljs-string">"github.com/gorilla/mux"</span>
)

<span class="hljs-title">func</span> <span class="hljs-title">GetAllUsers</span>(<span class="hljs-title">w</span> <span class="hljs-title">http</span>.<span class="hljs-title">ResponseWriter</span>, <span class="hljs-title">r</span> <span class="hljs-operator">*</span><span class="hljs-title">http</span>.<span class="hljs-title">Request</span>) {
    <span class="hljs-title">w</span>.<span class="hljs-title">Header</span>().<span class="hljs-title">Set</span>(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
    <span class="hljs-title">json</span>.<span class="hljs-title">NewEncoder</span>(<span class="hljs-title">w</span>).<span class="hljs-title">Encode</span>(<span class="hljs-title">UsersData</span>)
}

<span class="hljs-title">func</span> <span class="hljs-title">GetUser</span>(<span class="hljs-title">w</span> <span class="hljs-title">http</span>.<span class="hljs-title">ResponseWriter</span>, <span class="hljs-title">r</span> <span class="hljs-operator">*</span><span class="hljs-title">http</span>.<span class="hljs-title">Request</span>) {
    <span class="hljs-title">w</span>.<span class="hljs-title">Header</span>().<span class="hljs-title">Set</span>(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    <span class="hljs-title">vars</span> :<span class="hljs-operator">=</span> <span class="hljs-title">mux</span>.<span class="hljs-title">Vars</span>(<span class="hljs-title">r</span>)
    <span class="hljs-title">paramId</span>, <span class="hljs-title"><span class="hljs-keyword">_</span></span> :<span class="hljs-operator">=</span> <span class="hljs-title">strconv</span>.<span class="hljs-title">Atoi</span>(<span class="hljs-title">vars</span>[<span class="hljs-string">"id"</span>])

    <span class="hljs-title"><span class="hljs-keyword">for</span></span> <span class="hljs-title"><span class="hljs-keyword">_</span></span>, <span class="hljs-title">user</span> :<span class="hljs-operator">=</span> <span class="hljs-title">range</span> <span class="hljs-title">UsersData</span> {
        <span class="hljs-title"><span class="hljs-keyword">if</span></span> <span class="hljs-title">user</span>.<span class="hljs-title">ID</span> <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-title">paramId</span> {
            <span class="hljs-title">json</span>.<span class="hljs-title">NewEncoder</span>(<span class="hljs-title">w</span>).<span class="hljs-title">Encode</span>(<span class="hljs-title">user</span>)
            <span class="hljs-title"><span class="hljs-keyword">return</span></span>
        }
    }
}

<span class="hljs-title"><span class="hljs-keyword">type</span></span> <span class="hljs-title">ResponseMsg</span> <span class="hljs-title"><span class="hljs-keyword">struct</span></span> {
    <span class="hljs-title">Msg</span>  <span class="hljs-title"><span class="hljs-keyword">string</span></span> `<span class="hljs-title">json</span>:<span class="hljs-string">"message"</span>`
    <span class="hljs-title">User</span> <span class="hljs-title">User</span>   `<span class="hljs-title">json</span>:<span class="hljs-string">"user"</span>`
}

<span class="hljs-title">func</span> <span class="hljs-title">CreateUser</span>(<span class="hljs-title">w</span> <span class="hljs-title">http</span>.<span class="hljs-title">ResponseWriter</span>, <span class="hljs-title">r</span> <span class="hljs-operator">*</span><span class="hljs-title">http</span>.<span class="hljs-title">Request</span>) {
    <span class="hljs-title">w</span>.<span class="hljs-title">Header</span>().<span class="hljs-title">Set</span>(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    <span class="hljs-title"><span class="hljs-keyword">var</span></span> <span class="hljs-title">user</span> <span class="hljs-title">User</span>
    <span class="hljs-title">user</span>.<span class="hljs-title">ID</span> <span class="hljs-operator">=</span> <span class="hljs-title">rand</span>.<span class="hljs-title">Intn</span>(10000)

    <span class="hljs-title">json</span>.<span class="hljs-title">NewDecoder</span>(<span class="hljs-title">r</span>.<span class="hljs-title">Body</span>).<span class="hljs-title">Decode</span>(<span class="hljs-operator">&amp;</span><span class="hljs-title">user</span>)
    <span class="hljs-title">UsersData</span> <span class="hljs-operator">=</span> <span class="hljs-title">append</span>(<span class="hljs-title">UsersData</span>, <span class="hljs-title">user</span>)

    <span class="hljs-title"><span class="hljs-built_in">msg</span></span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User created successfully"</span>
    <span class="hljs-title">json</span>.<span class="hljs-title">NewEncoder</span>(<span class="hljs-title">w</span>).<span class="hljs-title">Encode</span>(<span class="hljs-title">ResponseMsg</span>{<span class="hljs-title"><span class="hljs-built_in">msg</span></span>, <span class="hljs-title">user</span>})
}

<span class="hljs-title">func</span> <span class="hljs-title">UpdateUser</span>(<span class="hljs-title">w</span> <span class="hljs-title">http</span>.<span class="hljs-title">ResponseWriter</span>, <span class="hljs-title">r</span> <span class="hljs-operator">*</span><span class="hljs-title">http</span>.<span class="hljs-title">Request</span>) {
    <span class="hljs-title">w</span>.<span class="hljs-title">Header</span>().<span class="hljs-title">Set</span>(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    <span class="hljs-title">vars</span> :<span class="hljs-operator">=</span> <span class="hljs-title">mux</span>.<span class="hljs-title">Vars</span>(<span class="hljs-title">r</span>)
    <span class="hljs-title">paramId</span>, <span class="hljs-title"><span class="hljs-keyword">_</span></span> :<span class="hljs-operator">=</span> <span class="hljs-title">strconv</span>.<span class="hljs-title">Atoi</span>(<span class="hljs-title">vars</span>[<span class="hljs-string">"id"</span>])

    <span class="hljs-title"><span class="hljs-keyword">for</span></span> <span class="hljs-title">index</span>, <span class="hljs-title">user</span> :<span class="hljs-operator">=</span> <span class="hljs-title">range</span> <span class="hljs-title">UsersData</span> {
        <span class="hljs-title"><span class="hljs-keyword">if</span></span> <span class="hljs-title">user</span>.<span class="hljs-title">ID</span> <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-title">paramId</span> {
            <span class="hljs-title">UsersData</span> <span class="hljs-operator">=</span> <span class="hljs-title">append</span>(<span class="hljs-title">UsersData</span>[:<span class="hljs-title">index</span>], <span class="hljs-title">UsersData</span>[<span class="hljs-title">index</span><span class="hljs-operator">+</span>1:]...)
        }
    }

    <span class="hljs-title"><span class="hljs-keyword">var</span></span> <span class="hljs-title">user</span> <span class="hljs-title">User</span>
    <span class="hljs-title">user</span>.<span class="hljs-title">ID</span> <span class="hljs-operator">=</span> <span class="hljs-title">paramId</span>
    <span class="hljs-title">json</span>.<span class="hljs-title">NewDecoder</span>(<span class="hljs-title">r</span>.<span class="hljs-title">Body</span>).<span class="hljs-title">Decode</span>(<span class="hljs-operator">&amp;</span><span class="hljs-title">user</span>)
    <span class="hljs-title">UsersData</span> <span class="hljs-operator">=</span> <span class="hljs-title">append</span>(<span class="hljs-title">UsersData</span>, <span class="hljs-title">user</span>)

    <span class="hljs-title"><span class="hljs-built_in">msg</span></span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User updated successfully"</span>
    <span class="hljs-title">json</span>.<span class="hljs-title">NewEncoder</span>(<span class="hljs-title">w</span>).<span class="hljs-title">Encode</span>(<span class="hljs-title">ResponseMsg</span>{<span class="hljs-title"><span class="hljs-built_in">msg</span></span>, <span class="hljs-title">user</span>})
}

<span class="hljs-title">func</span> <span class="hljs-title">DeleteUser</span>(<span class="hljs-title">w</span> <span class="hljs-title">http</span>.<span class="hljs-title">ResponseWriter</span>, <span class="hljs-title">r</span> <span class="hljs-operator">*</span><span class="hljs-title">http</span>.<span class="hljs-title">Request</span>) {
    <span class="hljs-title">w</span>.<span class="hljs-title">Header</span>().<span class="hljs-title">Set</span>(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)

    <span class="hljs-title">vars</span> :<span class="hljs-operator">=</span> <span class="hljs-title">mux</span>.<span class="hljs-title">Vars</span>(<span class="hljs-title">r</span>)
    <span class="hljs-title">paramId</span>, <span class="hljs-title"><span class="hljs-keyword">_</span></span> :<span class="hljs-operator">=</span> <span class="hljs-title">strconv</span>.<span class="hljs-title">Atoi</span>(<span class="hljs-title">vars</span>[<span class="hljs-string">"id"</span>])

    <span class="hljs-title"><span class="hljs-keyword">for</span></span> <span class="hljs-title">index</span>, <span class="hljs-title">user</span> :<span class="hljs-operator">=</span> <span class="hljs-title">range</span> <span class="hljs-title">UsersData</span> {
        <span class="hljs-title"><span class="hljs-keyword">if</span></span> <span class="hljs-title">user</span>.<span class="hljs-title">ID</span> <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-title">paramId</span> {
            <span class="hljs-title">UsersData</span> <span class="hljs-operator">=</span> <span class="hljs-title">append</span>(<span class="hljs-title">UsersData</span>[:<span class="hljs-title">index</span>], <span class="hljs-title">UsersData</span>[<span class="hljs-title">index</span><span class="hljs-operator">+</span>1:]...)
        }
    }

    <span class="hljs-title"><span class="hljs-built_in">msg</span></span> :<span class="hljs-operator">=</span> <span class="hljs-string">"User deleted successfully"</span>
    <span class="hljs-title">json</span>.<span class="hljs-title">NewEncoder</span>(<span class="hljs-title">w</span>).<span class="hljs-title">Encode</span>(<span class="hljs-title"><span class="hljs-built_in">msg</span></span>)
}
</code></pre><p>So far we have only created the individual components of our API; now it's time to wire them together in the <code>main.go</code> file.</p>
<h2 id="heading-connecting-all-the-components-and-starting-our-api">Connecting all the components and starting our API</h2>
<p>We need to get the <code>Router</code> from the <code>app.go</code> file and call the <code>HandleRoutes()</code> function.  </p>
<p>First, let's import the required packages: <code>fmt</code>, <code>log</code>, and <code>net/http</code>.  </p>
<pre><code><span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/http"</span>
)
</code></pre><p>To start the server, we call the <code>ListenAndServe</code> function from the <code>net/http</code> package, passing the port address (<code>:8000</code> in our case) and the router we created in the <code>app.go</code> file:</p>
<pre><code>fmt.Println(<span class="hljs-string">"Server started on port:8000"</span>)

log.Fatal(http.ListenAndServe(<span class="hljs-string">":8000"</span>, r))
</code></pre><h4 id="heading-the-maingo-file-looks-like-this">The <code>main.go</code> file looks like this</h4>
<pre><code><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/http"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    r := Router

    HandleRoutes()

    fmt.Println(<span class="hljs-string">"Server started on port:8000"</span>)

    log.Fatal(http.ListenAndServe(<span class="hljs-string">":8000"</span>, r))
}
</code></pre><p><strong>Finally</strong>, here we are, done creating a very basic API using Gorilla Mux in Go.<br />Now it's time to check our API using Postman.</p>
<h2 id="heading-testing-our-api-using-postman">Testing our API using Postman</h2>
<p>To test our API we need to run the project, so enter the following command in a terminal from the project folder:</p>
<pre><code><span class="hljs-keyword">go</span> run .
</code></pre><p>You should see output like this:</p>
<blockquote>
<p>Server started on port:8000</p>
</blockquote>
<p>Your API server is now running on localhost at port 8000 and can be accessed at <code>http://localhost:8000</code>.</p>
<h4 id="heading-testing-on-postman">Testing on Postman</h4>
<ul>
<li><strong>Get all Users</strong><br />To get all the users, in Postman enter the url <code>"localhost:8000/users"</code> and keep the method as <code>GET</code>.<br />We get the following response:  </li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025152671/2tfvS33Cx.png" alt="image.png" class="image--center mx-auto" /></p>
<ul>
<li><strong>Get specific User</strong><br />To get a specific user, in Postman enter the url <code>"localhost:8000/user/3"</code> to get the user with id 3. You can use another id in place of 3.<br />We get the following response:  </li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025188511/B8SaBV1kL.png" alt="image.png" class="image--center mx-auto" /></p>
<ul>
<li><strong>Create User</strong><br />To create a new user, in Postman enter the url <code>"localhost:8000/user"</code>, then open the Headers tab and set <code>Content-Type</code> to <code>application/json</code>  </li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660024682716/94qCQIpKZ.png" alt="image.png" class="image--center mx-auto" /> 
Then create the request body from <code>Body</code> tab  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660024728309/fAMPzTRGT.png" alt="image.png" class="image--center mx-auto" /></p>
<p>Now set the method to <code>POST</code> and hit Send.<br />Then we get a response something like this:  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025214421/r1ctjT1A4.png" alt="image.png" class="image--center mx-auto" /></p>
<p>And when you run the get-all-users request again, you can see the change.  </p>
<ul>
<li><strong>Update User</strong><br />To update a user, in Postman enter the url <code>"localhost:8000/user/1"</code> and set the header as done above.<br />Now provide the updated information in the request body using the Body tab:  </li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025013474/hfHMGc7VW.png" alt="image.png" class="image--center mx-auto" /></p>
<p>Now set the method to <code>PUT</code> and hit Send.<br />The response will look like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025233763/rY978hK-4.png" alt="image.png" class="image--center mx-auto" /></p>
<ul>
<li><strong>Deleting User</strong><br />To delete a user, in Postman enter the url <code>"localhost:8000/user/1"</code> with the method set to <code>DELETE</code>.<br />You will get the following response:  </li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1660025342472/VF5w65R3B.png" alt="image.png" class="image--center mx-auto" />  </p>
<h2 id="heading-what-we-did-today-conclusion">What we did today - Conclusion</h2>
<p>Gorilla Mux is a great package for building APIs easily and efficiently. This was a simple example using an in-memory slice of sample data, but the same structure works with real storage: relational databases like MySQL and PostgreSQL, NoSQL stores like MongoDB and Redis, or cloud databases like DynamoDB.<br />No need to worry if those sound advanced; I will be making detailed tutorials on each, just like this one, so stay tuned, subscribe to the newsletter so you never miss upcoming articles, and follow me for more.</p>
<p>Here is the link for the Github repository of this project:<br />%[https://github.com/Aniketyadav44/go-mux-tut]</p>
<h4 id="heading-lets-connect">Let's Connect</h4>
<p><a target="_blank" href="https://www.linkedin.com/in/aniani4848/">LinkedIn</a> 
<a target="_blank" href="https://github.com/Aniketyadav44">Github</a>
<a target="_blank" href="https://anikety.netlify.app">Website</a>
<a target="_blank" href="https://twitter.com/AniketY8888">Twitter</a>
<a target="_blank" href="https://instagram.com/anikettsy">Instagram</a></p>
<h4 id="heading-follow-me-for-such-more-detailed-tutorials-and-articles-i-try-to-make-learning-programming-and-development-in-much-fun-and-easier-way-possible">Follow me for more detailed tutorials and articles like this. I try to make learning programming and development as fun and easy as possible.</h4>
<h3 id="heading-so-keep-learning-and-enjoy-making-technologies">So Keep Learning, and Enjoy making Technologies!</h3>
<h3 id="heading-peace">Peace!</h3>
]]></content:encoded></item></channel></rss>