<?xml version="1.0" encoding="UTF-8" ?>
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Adilson M. Bacelar Portfolio</title>
        <link>https://www.ambacelar.com</link>
        <description>Portfolio updates and writing by Adilson Bacelar.</description>
        <language>en-gb</language>
        <lastBuildDate>Sun, 05 Apr 2026 00:00:00 GMT</lastBuildDate>
        <atom:link href="https://www.ambacelar.com/rss" rel="self" type="application/rss+xml" />
        <item>
          <title>Are You Richer, or Just More Expensive?</title>
          <link>https://www.ambacelar.com/blog/are-you-richer-or-just-more-expensive</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/are-you-richer-or-just-more-expensive</guid>
          <description>GDP measures how much money moves through an economy, not how good life actually is for the people living in it.</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[Eventually, you grow up and realise that GDP, on its own, tells you almost nothing about how good life actually is.

It tells you that a lot of money is moving around. It does not tell you whether ordinary people can afford a home, raise children without financial panic, or live with any real dignity on a normal salary. A city can be globally prestigious, economically powerful, and still offer the average person living there an absolutely miserable deal.

And yet we keep using GDP as shorthand for success. As if a bigger number means a better life. As if "the economy is strong" and "people are doing well" are the same sentence.

They are not.

---

GDP measures output. It measures the total value of goods and services flowing through an economy. What it does not measure, at all, is whether life feels affordable, stable, or humane for the person earning an ordinary wage.

It cannot tell you whether a 30-year-old on a normal salary can rent a flat alone. It cannot tell you whether two working parents can afford childcare without drowning. It says nothing about commute times, housing quality, savings rates, or whether "getting by" requires constant, grinding financial stress.

Here is what it *can* do: it can make an expensive, high-pressure system look like a prosperous one.

Because sometimes a huge GDP simply means that huge amounts of money must circulate for ordinary people to maintain an ordinary life. If rent is brutal, childcare is brutal, healthcare is brutal, and the cost of simply existing in a city keeps climbing, then yes, more money moves through the system. GDP goes up. But that does not mean anyone is thriving. It means the cost of normality has been inflated to the point where large incomes are required just to stand still.

---

The numbers make this painfully clear.

In London, the Trust for London's 2025 Minimum Income Standard found that a single working-age adult in Inner London needs **£54,400 a year** just to reach what researchers consider a minimum acceptable standard of living. Not comfort. Not luxury. The minimum. And rent alone eats **49%** of that budget. Meanwhile, office administrator roles in the City of London average around £36,000. Graduate median salaries fifteen months after finishing university sit at £28,500 nationally. The gap between what London demands and what London pays ordinary people is not a crack. It is a canyon.

Housing? London had the highest house-price-to-earnings ratio in England and Wales in 2025: homes cost **10.6 times** average earnings.

Cross the Atlantic and the picture does not improve. New York City's median household income is $80,483, which sounds robust until you see that average asking rent for a one-bedroom is around $3,595 a month. In San Francisco, the MIT Living Wage Calculator estimates that a single adult with no children needs $32.44 an hour just to meet basic needs. Two working parents with two children? They each need **$48.99 an hour**. Annual childcare costs for that household sit around $54,400. The median value of an owner-occupied home is $1.39 million.

These are rich cities. On paper. For ordinary earners, they are expensive systems demanding expensive incomes, and calling that arrangement "prosperity."

---

So here is the question nobody in the GDP conversation wants to sit with.

You live in London, New York, or San Francisco. You earn an average wage in an average job. Your parents are not rich. You do not own property. There is no family money waiting in the background.

Do you actually live a better life than your rough equivalent in Bangkok, Tokyo, or Chongqing?

Or do you simply live in a place with higher prices, higher stress, and a more globally powerful currency?

Because in Bangkok, estimated monthly living costs excluding rent are about ฿22,255 for a single person. One-bedroom rentals in central areas start around ฿22,000–25,000 a month, with options available from ฿11,500. Numbeo estimates that Bangkok is roughly 53% cheaper than London excluding rent, and that rents are about 73% lower on average. Customer-service roles in Bangkok commonly pay ฿20,000–30,000 a month, and that money stretches in ways that a London salary simply cannot.

In Chongqing, a vast, modern, dramatically vertical Chinese city, estimated monthly costs excluding rent are about ¥3,261 for a single person. Rental listings show one-room options around ¥1,300 a month. Three-bedroom family apartments can be found for ¥2,000–2,500.

Tokyo complicates things in a useful way. It is not cheap. It is not a low-GDP city. But it is a place where ordinary life tends to function more coherently than in London or San Francisco. Rents are serious but not absurd relative to earnings. Public transport works. The cost of food, healthcare, and daily services does not carry the same extractive weight. Tokyo is proof that a high-GDP city does not have to punish its average residents as a condition of existing.

If the answer to the question "does the person in London or San Francisco obviously live better?" is not a clear yes, then GDP has already failed as a measure of ordinary prosperity.

---

But there is another layer to this, and it is the one that makes the whole thing harder to stomach.

GDP is not just a domestic statistic. It is a geopolitical tool.

When you live inside a high-GDP, strong-currency economy, you gain something beyond a local paycheque: international purchasing power. You can fly abroad. You can book hotels in other countries without flinching. You can study, holiday, invest, or relocate in ways that people from lower-GDP economies often cannot, even when those people live as well as you do at home. Sometimes better.

That is the real obscenity of the GDP illusion.

Someone in Bangkok or Chongqing might have more space, less rent stress, cheaper food, a shorter commute, more time with family, and more money left at the end of the month than someone in London. But the person in London can afford to fly to Bangkok. The person in Bangkok often cannot afford to fly to London, to see, firsthand, that the Londoner is house-sharing with strangers into their mid-thirties, spending half their income on a room, and calling it a career.

Two people. Similar quality of daily life, or one arguably better than the other. But only one of them has the currency that travels. Only one of them is assumed, by default, to belong to the more successful economy.

That is not a measure of civilisational achievement. That is a measure of whose currency projects furthest.

---

None of this means GDP is meaningless. It measures something real. But the moment people start treating it as proof that ordinary life is affordable, secure, or worth the price, that is where the lie begins.

The real question was never how large an economy is. It was always what kind of life an ordinary person can actually build inside it. And right now, some of the largest, most prestigious economies in the world are offering their average citizens a pretty brutal answer.

---

## Sources

### London / UK
- [Trust for London – A Minimum Income Standard for London 2025](https://trustforlondon.org.uk/documents/1027/A_Minimum_Income_Standard_for_London_2025_final.pdf)
- [ONS – Housing affordability in England and Wales: 2025](https://www.ons.gov.uk/peoplepopulationandcommunity/housing/bulletins/housingaffordabilityinenglandandwales/2025)
- [HESA – Graduate Outcomes 2022/23 salary summary](https://www.hesa.ac.uk/news/17-07-2025/sb272-higher-education-graduate-outcomes-statistics/salary)
- [Reed – Average Office Administrator Salary in City of London](https://www.reed.co.uk/average-salary/average-office-administrator-salary-in-city-of-london)

### New York City
- [U.S. Census QuickFacts – New York City](https://www.census.gov/quickfacts/fact/table/newyorkcitynewyork/PST040225)
- [MIT Living Wage Calculator – New York](https://livingwage.mit.edu/states/36)
- [Zillow Rental Manager – New York rental market trends](https://www.zillow.com/rental-manager/market-trends/new-york-ny/)

### San Francisco
- [MIT Living Wage Calculator – San Francisco County](https://livingwage.mit.edu/counties/06075)
- [U.S. Census QuickFacts – San Francisco](https://www.census.gov/quickfacts/fact/table/sanfranciscocitycalifornia/PST040225)
- [Zillow Rental Manager – San Francisco rental market trends](https://www.zillow.com/rental-manager/market-trends/san-francisco-ca/)

### Bangkok
- [Numbeo – Cost of Living in Bangkok](https://www.numbeo.com/cost-of-living/in/Bangkok)
- [JobsDB – Customer Relations Officer salary in Thailand](https://th.jobsdb.com/career-advice/role/customer-relations-officer/salary)
- [DDproperty – Condos for rent in Bangkok](https://www.ddproperty.com/en/condo-for-rent/in-bangkok-th10)

### Tokyo
- [GaijinPot – Average Salary in Tokyo](https://blog.gaijinpot.com/what-is-the-average-salary-in-tokyo/)
- [Numbeo – Cost of Living in Tokyo](https://www.numbeo.com/cost-of-living/in/Tokyo)
- [SUUMO – Tokyo rentals](https://suumo.jp/chintai/tokyo/)

### Chongqing
- [Numbeo – Cost of Living in Chongqing](https://www.numbeo.com/cost-of-living/in/Chongqing)
- [TeamedUp China – Average Salary in Chongqing](https://teamedupchina.com/average-salary-in-chongqing-china/)
- [58.com – Chongqing rentals](https://mob.58.com/zujin/cq-cqftl/)]]></content:encoded>
        </item>
        <item>
          <title>Building a Digital Kimbundu Dictionary in the Age of AI</title>
          <link>https://www.ambacelar.com/blog/building-a-digital-kimbundu-dictionary-in-the-age-of-ai</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/building-a-digital-kimbundu-dictionary-in-the-age-of-ai</guid>
          <description>How I turned a historical scanned Kimbundu–Portuguese dictionary into a structured lexical corpus, and why digital cultural libraries matter more than ever.</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[## The project started with a simple question

What happens to a language when its best reference materials only exist as scanned books?

That question has been sitting with me for a long time.

Kimbundu is one of the major Bantu languages of Angola. It is not a small language, and it is not culturally insignificant. But if you go looking for serious digital resources, the landscape gets thin fast. The most substantial references live in old printed works, scanned PDFs, and fragments of knowledge that are difficult to search, hard to reuse, and invisible to modern software.

One of those references is a historical **Kimbundu–Portuguese dictionary**. I decided to take it and turn it into something machines and people could both use: a structured, searchable lexical corpus that could power a public dictionary website and, over time, a wider digital cultural library.

That is how **kimbundu.org** began.

## This was never really "just an OCR project"

At the beginning, it is tempting to think a project like this is mostly about OCR. It is not. OCR is the first capture layer. The real problem is structure.

Historical dictionaries are dense, compact, and full of compressed meaning. This source material is printed in a two-column layout, packed with abbreviations, noun-class markers, cross-references, grammatical notes, and weak separators that are obvious to a human reader but ambiguous to a machine. The scan itself introduces its own problems: dust, diacritics, broken ligatures, header bleed, line-wrap artefacts, and column-boundary confusion.

A direct "PDF to text" workflow would have produced something that looked complete while quietly hiding structural errors everywhere.

So I made an early decision that shaped the whole project:

> Do not treat the dictionary as a single OCR task. Treat it as a staged corpus-engineering problem.

## The pipeline I built

The workflow eventually became a twelve-stage pipeline: from PDF acquisition and page rendering, through column segmentation and OCR capture, into deterministic parsing, chunked extraction, and corpus consolidation, then reconstruction across line, column, page, and chunk boundaries, cleanup into a Portuguese-first lexical dataset, a conservative LLM audit, editorial merge, and finally a publication layer for the website.
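
The staged shape of that pipeline can be sketched in a few lines of Python. The stage names below follow the description above, but the runner itself is a hypothetical stand-in for the project's actual scripts:

```python
import json
from pathlib import Path

# Stage names follow the twelve-stage description above; the transform
# applied at each stage here is a placeholder, not the real project code.
STAGES = [
    "acquire_pdf", "render_pages", "segment_columns", "ocr_capture",
    "parse_entries", "extract_chunks", "consolidate_corpus",
    "reconstruct_boundaries", "clean_entries", "llm_audit",
    "editorial_merge", "publish",
]

def run_pipeline(data, out_dir="artifacts"):
    """Run every stage in order, persisting each intermediate artefact
    so earlier stages stay inspectable after later ones run."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, stage in enumerate(STAGES, start=1):
        data = {"stage": stage, "payload": data}  # placeholder transform
        (out / f"{i:02d}_{stage}.json").write_text(json.dumps(data))
    return data
```

The placeholder transform is beside the point; the discipline is that every stage writes its output to disk, so any step can be inspected, diffed, or re-run in isolation.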

That architecture now lives explicitly in the project documentation, rather than as tribal knowledge in scripts and terminal history.

What mattered most was that every stage stayed **inspectable**. Raw OCR is preserved. Page and column provenance are retained. Cleanup is separated from semantic enrichment. Audit output is advisory, never destructive. Editorial changes are batched and merged explicitly.

The corpus is digital, and it is **traceable**.

## Why deterministic parsing mattered

Large language models are powerful, but they are not a substitute for structure.

The core parsing and reconstruction pipeline had to be deterministic. I did not want the source of truth to shift every time I reran a script. The system needed to produce stable outputs, preserve evidence, and make it obvious where uncertainty remained.

So the main workflow stayed deliberately conservative:

- raw OCR stayed raw
- parsing extracted only what could be supported structurally
- reconstruction repaired fused entries before semantic cleanup
- cleanup produced a Portuguese-first lexical dataset
- LLMs were used later as auditors, not silent editors
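
A toy example of what "extract only what can be supported structurally" means in practice, assuming a simplified `headword, gloss` entry shape (the real separators in the scanned dictionary are far messier):

```python
import re

# Hypothetical simplified entry shape: "headword, gloss text."
# The conservative rule: parse what the structure clearly supports,
# and flag everything else for review instead of guessing.
ENTRY_RE = re.compile(r"^(?P<head>\S+),\s+(?P<gloss>.+)$")

def parse_line(line):
    """Return a parsed entry, or a review flag when the line is ambiguous."""
    m = ENTRY_RE.match(line.strip())
    if m:
        return {"status": "parsed", "head": m.group("head"),
                "gloss": m.group("gloss")}
    return {"status": "needs_review", "raw": line}
```

A parser like this produces the same output every time it runs on the same input, which is exactly what keeps the source of truth from shifting between reruns.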

The most useful role for the model was not "rewrite the dictionary for me." It was:

> "Flag suspicious entries, classify likely issues, and suggest where human attention is warranted."

That distinction kept the project honest.
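
In code terms, that advisory role might look like the sketch below: audit flags are attached alongside entries, and the entry text itself is never rewritten (the data shapes are hypothetical):

```python
def apply_audit(entries, audit_flags):
    """Attach advisory audit flags to entries without mutating them.
    `audit_flags` maps an entry id to a list of issue labels, e.g.
    {"k-2": ["suspicious_gloss"]} -- a hypothetical shape."""
    return [{**e, "audit": audit_flags.get(e["id"], [])} for e in entries]
```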

## What the system actually produced

The end state is a layered corpus with multiple explicit outputs: a cleaned corpus, an audited corpus, an editorial working corpus, a final merged version, a slim public corpus, and a site publication bundle.

After editorial recovery, the final merged corpus contains **10,679 entries**, including 376 approved editorial resolutions, 58 entries recovered during editorial work, and 28 dropped false or redundant fragments.

On the website side, the current application serves that public corpus in a dictionary-first interface with landing/search, word pages, alphabetical browse, noun-class pages, and multilingual routing. The project is already a usable public resource, not just a research artefact.

## The hardest part was structural recovery

Historical dictionaries do not stop cleanly at machine-friendly boundaries. Entries bleed into each other across line wraps. A page heading can leak into lexical content. A cross-reference can look like a new definition. A broken scan can make one entry look like three, or three look like one.

Some of the most useful work in the pipeline involved reconstructing entries across line, column, page, and chunk boundaries. This is where the project stopped being "document parsing" and became something closer to **historical corpus reconstruction**.
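
One flavour of that repair, joining line-wrap continuations back onto the entries they belong to, can be sketched like this (the heuristic is illustrative, not the project's actual rule set):

```python
import re

HEAD_RE = re.compile(r"^\S+,\s")  # crude "this line starts a new headword" signal

def reconstruct_entries(lines):
    """Merge OCR lines into entries: a line that looks like a new
    headword starts an entry; anything else is treated as a line-wrap
    continuation of the previous one."""
    entries = []
    for line in lines:
        line = line.rstrip()
        if not line:
            continue
        if HEAD_RE.match(line) or not entries:
            entries.append(line)
        else:
            entries[-1] += " " + line.lstrip()
    return entries
```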

I now think of the project less as a dictionary site and more as the foundation of a digital cultural archive.

## Why this matters in the age of language models

There is a broader reason I care about this work.

We are entering a period where more and more people will not search the web in the traditional sense. They will ask language models. They will ask assistants. They will rely on generated summaries, synthetic answers, and systems that compress the world into a handful of probable responses.

Those systems will only know what has been made digitally available to them.

If a language is poorly digitised, its representation in future AI systems will be thin, distorted, or absent. If a culture's best sources remain trapped in scans, inaccessible archives, or hard-to-parse books, future systems will reflect that absence. The problem is not technical. It is civilisational.

A high-quality digital cultural library is infrastructure. It gives learners something trustworthy to build on, researchers something structured to analyse, future tools something real to retrieve, and a community something that preserves voice, memory, and meaning in machine-readable form.

I do not want my children, twenty years from now, to inherit a future where their access to Angolan culture is mediated mainly through whatever random scraps Silicon Valley happened to ingest. I want them to have access to real archives, real voices, and materials built with care and proximity to the people and histories involved.

## Why kimbundu.org is only the beginning

The dictionary matters, but it is not the endpoint. The broader goal is a genuine **digital Kimbundu cultural library**.

That means, over time:

- a modernised Portuguese display layer alongside the historical source text
- English and French translation layers
- grammar resources and noun-class guides
- Kimbundu Bible texts and audio
- stories, proverbs, songs, and oral materials
- linked references between dictionary entries and broader learning resources

The website already reflects that direction: the app is intentionally dictionary-first, but clearly positioned as the beginning of a wider language-preservation platform. That pacing is deliberate. A strong archive should be built on a stable foundation.

## What I learned building it

### Good intermediate artefacts are worth the effort

Debug images, page-level JSON, chunk summaries, corpus reports, audit summaries, editorial manifests -- all of it felt like overhead at first. It wasn't. It was the reason the project remained debuggable as complexity grew.

### Reviewability is a feature

When you work with historical material, being able to explain _why_ a transformation happened matters almost as much as the transformation itself.

### LLMs are best used carefully

The most valuable use of language models here was constrained issue detection and conservative audit suggestion, not freeform rewriting. That boundary protected the corpus from becoming a moving target.

### Structure comes before enrichment

It is tempting to jump into translation, glossing, or more visible product features. But once the underlying structure is unstable, everything on top of it becomes expensive to trust.

## Where I want to take it next

The next phase is about making the corpus more useful without losing fidelity:

- modern Portuguese display forms alongside the original dictionary Portuguese
- future Kimbundu standardisation support as orthographic guidance becomes clearer
- better educational presentation of noun classes and grammar
- deeper links between lexical entries and texts, songs, and stories
- eventually, richer language tools built on top of a stable, inspectable archive

The dictionary was the first hard problem. It will not be the last.

## Final thought

The most important result of this project is not that a historical dictionary became searchable. It is that the path from scan to structured cultural resource is now explicit, auditable, and reusable.

Preservation is not just about keeping artefacts alive. It is about making them legible to the systems that shape future access to knowledge. And increasingly, those systems are no longer shelves or search engines. They are models.

If we want languages like Kimbundu to remain visible in that future, we need to build the libraries now.]]></content:encoded>
        </item>
        <item>
          <title>why does everything suck?</title>
          <link>https://www.ambacelar.com/blog/why-does-everything-suck</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/why-does-everything-suck</guid>
          <description>The Mathematical Reality Behind the &quot;Enshittification&quot; of Everything</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[**The Mathematical Reality Behind the "Enshittification" of Everything**

Recently, we've witnessed a pervasive decline in the quality and originality of products across various industries—a phenomenon commonly referred to as the "enshittification" of everything. At its core, this issue stems from the sheer size of companies and teams, all of which need to get paid. This financial necessity creates a limited runway for these large entities, forcing them to prioritise profit over innovation.

Consider the astronomical budgets of movies and games today. These projects are often heavily subsidised by venture capital, banking on the hope of future profitability. Such massive undertakings cannot afford to be reactive or experimental. Every decision carries risks that can cost millions within a quarter, just in salaries alone. As a result, any new initiative requires approval from multiple layers of management, culminating in a decision-maker who may have little context about the original proposal.

This decision-maker, facing high stakes and lacking specific insights, tends to rely on what has worked elsewhere. They might mandate that your project incorporate an AI component simply because another project found success with it. Introducing something entirely new is nearly impossible in this environment. Your innovative idea is deemed too risky without data to support its potential success.

So, what can you do?

**Break Free and Create Independently**

Don't confine your creativity to company time. If you're disillusioned by the uninspiring games churned out by large studios, consider developing your own. Whether as a solo indie developer or part of a small team of passionate individuals, you have the freedom to experiment and innovate without the bureaucratic hurdles.

If you're convinced a particular feature will resonate with your target audience, build a minimum viable product (MVP). Showcasing initial sales and demonstrating a steady monthly recurring revenue (MRR) can prove that your idea is worth further investment.

For those who feel that current movies lack heart, take the initiative to produce something fresh and original. By stepping away from corporate constraints, you can create meaningful and unique content.

**The Advantage of Being Small**

Breaking away from the "fiduciary duties" that compel companies to chase profit at all costs is neither simple nor easy, and success is not guaranteed. However, the potential rewards are significant. In software development, for example, you can create an MVP and position yourself as a startup founder. You can offer a product with your unique spin, testing your ideas in the market. Your operational costs will be a fraction of those incurred by giant corporations, allowing you to charge less while still earning a healthier profit margin. With fewer employees, there's a larger slice of the pie for everyone involved.

**Considerations and Challenges**

- **Regulated Industries**: Be cautious when entering oligarchical industries with heavy regulations and costly red tape. The expenses associated with audits and compliance might hinder the viability of your MVP or limit customer acquisition.

- **Time Investment in Games**: Developing a game, especially one with a compelling story and polished execution, is time-consuming. You'll likely build it in your spare time, and post-launch challenges like marketing, piracy, and distribution can be significant headaches.

- **Collaborative Efforts in Film**: Creating movies or serialised productions comes with its own set of challenges. Unlike software or games, film projects require real-time collaboration with actors, set designers, costume designers, directors, videographers, lighting technicians, screenwriters, and editors. Monetising such a project adds another layer of complexity. While your costs may be significantly lower than those of larger studios, you may face similar profitability challenges, especially if your vision doesn't align with mainstream investor interests.]]></content:encoded>
        </item>
        <item>
          <title>From Monoliths to Microservices</title>
          <link>https://www.ambacelar.com/blog/from-monoliths-to-microservices</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/from-monoliths-to-microservices</guid>
          <description>Navigating the Architectural Maze: From Monoliths to Microservices</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Fri, 25 Oct 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[We've all been there, sitting in a meeting room, whiteboard markers in hand, debating the best architecture for our next big project. The allure of microservices is strong—scalability, flexibility, and the promise of independent deployments. But as seasoned developers and solution architects know, the journey from a monolith to microservices isn't always as straightforward as the tech blogs make it seem.

**The Comfort of the Monolith**

Let's start with the good old monolithic architecture. It's like that reliable old car that, despite lacking the bells and whistles of modern vehicles, gets you from A to B without much fuss. In a monolith, all your application's components—be it the user interface, business logic, or data access layer—live together in one cohesive unit. This setup makes development and deployment a breeze. You can focus on building features without worrying about the complexities of inter-service communication or distributed transactions. That simplicity and effectiveness deserve more appreciation than they usually get.

For solo developers working on side projects or early-stage startups racing against the clock, monoliths are often the way to go. They allow you to move fast, validate ideas, and get your product into users' hands without getting bogged down by architectural overhead.

However, that trusty monolith can feel like a ball and chain as your application grows. Scaling becomes an all-or-nothing game—you can't just scale the needed parts. Deployments become risky because changing one area can impact the entire system. And let's not even get started on the challenges of onboarding new team members to a massive, intertwined codebase.

**The Mirage of the Distributed Monolith**

In an attempt to solve these problems, many teams venture into what they believe is microservices territory but end up creating a distributed monolith instead. It's like trying to modernise your old car by adding new parts, but without reengineering the core—you end up with a Frankenstein's monster that's neither here nor there.

A distributed monolith often arises when teams break the application into separate services but fail to decouple them properly. They might still share a common database or rely heavily on synchronous communication. The result? You've got all the complexities of a distributed system—network latency, versioning issues, complex deployments—but without the benefits of true independence between services.

Organisations often fall into this trap when multiple teams work on different system parts. They split the application into services to avoid stepping on each other's toes. However, these services are still tightly coupled without clear boundaries and autonomy. Changes in one service ripple through others, and coordinated deployments become a nightmare.

**The Promise (and Perils) of Microservices**

Then there's the microservices architecture—the shiny new car everyone's talking about. In theory, it's brilliant. You decompose your application into small, independent services, each responsible for a specific business capability. Teams can develop, deploy, and scale their services independently. If done right, it can lead to increased agility, better fault isolation, and the ability to adopt the best technology stack for each service.

But here's the kicker: microservices introduce a whole new level of operational complexity. You're now dealing with distributed systems, which come with their own set of challenges—network reliability, data consistency, monitoring, and more. Deployments require sophisticated orchestration, and observability becomes crucial because tracing issues across multiple services isn't for the faint-hearted.

For companies that have found their product-market fit and are entering a growth phase, microservices can offer the scalability and flexibility needed to evolve rapidly. However, it's essential to weigh the benefits against the increased complexity and ensure that the organisation is prepared for the shift—not just technologically but also culturally.

**Choosing the Right Architecture for the Right Stage**

So, how do you decide which architectural path to take? It largely depends on where you are in your project's lifecycle.

If you're a solo developer hacking away on a side project, stick with a monolith. There's no need to complicate things. Your focus should be on delivering value quickly and efficiently.

A monolith is often still the best choice for startups in their early stages. Your priority is validating your product, attracting users, and iterating based on feedback. Introducing microservices too early can slow you down and divert resources from building core features.

As your company grows and your application becomes more complex, it's natural to consider breaking apart the monolith. But tread carefully. Transitioning to microservices requires careful planning and a solid understanding of your domain boundaries. It's not just about splitting code; it's about creating genuinely autonomous services that can stand independently.

**Avoiding the Pitfalls**

One of the biggest motivations for moving away from a monolith is to enable multiple teams to work in parallel without stepping on each other's toes. This makes sense, but it's crucial to establish clear service boundaries and ensure teams have the autonomy they need.

Communication between services should be carefully managed. Relying heavily on synchronous calls can create tight coupling, leading you back into distributed monolith territory. Asynchronous messaging and well-defined APIs can help mitigate these risks.
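
As a minimal illustration of that decoupling, here is an event-based hand-off sketched in Python, with an in-process queue standing in for a real broker such as RabbitMQ or Kafka. The service names are hypothetical; the point is that the publisher returns immediately instead of calling the consumer synchronously:

```python
import queue
import threading

# In a real system this would be a message broker; here an in-process
# queue is enough to show the publish/consume decoupling.
events = queue.Queue()

def order_service(order_id):
    """Publishes an event and returns immediately -- it does not call
    the billing service synchronously or wait for its result."""
    events.put({"type": "order_placed", "order_id": order_id})
    return "accepted"

def billing_service(processed):
    """Consumes events at its own pace, independent of the publisher."""
    while True:
        event = events.get()
        if event is None:  # shutdown sentinel
            break
        processed.append(event["order_id"])

processed = []
worker = threading.Thread(target=billing_service, args=(processed,))
worker.start()
order_service(1)
order_service(2)
events.put(None)  # signal shutdown
worker.join()
```

If the billing side is slow or briefly down, orders still get accepted; that failure isolation is precisely what tight synchronous coupling gives up.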

Deployment and monitoring are other areas where non-monolithic solutions can introduce significant challenges. With multiple services, you'll need robust CI/CD pipelines, container orchestration platforms, and comprehensive logging and monitoring solutions. These aren't trivial to set up and require ongoing maintenance.

Moreover, without proper governance, you risk accumulating technical debt. Inconsistent coding standards, duplicated efforts across services, and neglected documentation can make your system harder to maintain over time.

**The Cultural Shift**

Moving to microservices isn't just a technical change; it's a cultural one. Teams must embrace DevOps practices, take ownership of their services, and collaborate effectively. The organisation must be ready to invest in training and tooling to support this new way of working.

**In Conclusion**

Architecture isn't a one-size-fits-all solution. It's about choosing the right tool for the job at hand. Monoliths aren't inherently bad, and microservices aren't a silver bullet. Each has its place; the key is understanding the trade-offs involved.

Before diving headfirst into microservices, consider whether your organisation is ready for their complexities. Assess your team's capabilities, the nature of your application, and the real benefits you'll gain.

Remember, having a well-structured monolith is better than a poorly executed microservice architecture. Avoid the allure of the distributed monolith by ensuring that any move towards microservices is deliberate, well-planned, and accompanied by the necessary cultural and operational changes.]]></content:encoded>
        </item>
<item>
          <title>We Are Too Big for TODOs: Why Code Annotations Aren&apos;t Enough in Large Projects</title>
          <link>https://www.ambacelar.com/blog/too-big-to-use-todo</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/too-big-to-use-todo</guid>
          <description>Why Code Annotations Aren&apos;t Enough in Large Projects</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Mon, 30 Sep 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[Recently, I faced a challenge at work that underscored this issue. Our codebase had accumulated many `FIXME` and `TODO` annotations: 71 `TODO`s, to be exact. Many of these were outdated or lacked context, making them more of a hindrance than a help. We clearly needed a better system for managing technical debt and pending tasks.

## The Limitations of Code Annotations in Large Teams

One of the primary issues with in-code annotations is their invisibility to non-developer stakeholders like Product Owners (POs) and Business Analysts (BAs). These annotations reside within the codebase and are inaccessible to those who help shape the project's direction. This disconnect means essential tasks might be overlooked in planning and prioritisation sessions, leading to technical debt accumulating unnoticed.

In a large codebase, annotations can be easily buried and forgotten. Developers might not revisit a particular file or module for months when the `TODO` or `FIXME` remains unaddressed. This delay can cause minor issues to escalate into significant problems, affecting the project's overall health.

Modern development teams often use robust project management tools to track tasks, bugs, and feature requests. Relying on code annotations bypasses these systems, creating parallel tracks of work that aren't integrated with the team's workflow. This misalignment can lead to confusion, duplicated efforts, or tasks slipping through the cracks.

## Personal Coding Practices vs. Team Dynamics

When working on personal or small-scale projects, I've found value in using a variety of code annotations:

- **`FIXME`**: This is for code that's broken and needs immediate attention.
- **`TODO`**: For future enhancements, optimisations, or refactorings.
- **`XXX`**: For areas that require more thought and might be problematic.
- **`HACK`**: For temporary solutions that aren't ideal but work for now.
- **`NOTE`**: This is for meta-comments or reminders that need out-of-code context.
- **`DOCME`**: For sections that require documentation.

These annotations serve as quick reminders and help me navigate my code effectively. However, this approach relies on my personal oversight and the fact that I'm intimately familiar with every part of the project.

This method does not scale well in a team setting, especially in large projects. Not every team member will understand the context behind each annotation, and over time the accumulation can lead to confusion rather than clarity.

## A Better Approach: Leveraging Project Management Tools

I advocate for transitioning from in-code annotations to using dedicated project management tools to address these challenges. Here's how we can map standard annotations to more effective practices:

- **`FIXME`**: Create a bug report in the tracking system. This ensures it's visible to the entire team and can be prioritised appropriately.
- **`TODO`**: Add a task or user story to the backlog. This allows the PO to prioritise it based on the project's goals.
- **`XXX`, `HACK`, `NOTE`**: Engage in discussions during refinement sessions or stand-ups. If the definition of ready isn't met, these concerns should be addressed before coding begins.

Formalising these tasks ensures they're visible to all stakeholders and integrated into the team's workflow. This approach promotes better communication, more accurate prioritisation, and a clearer understanding of the project's status.

As projects grow, so does the need for robust practices that scale with the team and codebase. While code annotations like `TODO` and `FIXME` have their place in smaller projects, relying on them in more extensive settings can lead to inefficiencies and miscommunication.

Transitioning to project management tools for tracking tasks and technical debt offers numerous benefits:

- **Enhanced Visibility**: All team members and stakeholders can see, prioritise, and discuss tasks.
- **Better Organisation**: Tasks are tracked in a centralised location, reducing the risk of them being forgotten.
- **Improved Collaboration**: Teams can work together more effectively, with clear insights into what needs to be done.

In essence, we were too big to be using `TODO`s in the codebase. By adopting practices that align with our project's scale, we were able to manage our technical debt more effectively and keep our codebase healthier in the long run.]]></content:encoded>
        </item>
<item>
          <title>Between Legacy and Reality: Do I want children?</title>
          <link>https://www.ambacelar.com/blog/between-legacy-and-reality</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/between-legacy-and-reality</guid>
          <description>Do I want children? Or do I need children to justify all my past actions?</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Mon, 23 Sep 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[_Today, a friend asked me a simple question: "Do you want children?" Without hesitation, I said yes. But as I went about my day, especially during a long, contemplative shower, I began to meditate on the layers behind that immediate response._

My quick answer stems from a long-standing belief in legacy. I've written before about data and death, pondering what my grandchildren and great-grandchildren might know of me if the platforms I use today disappear tomorrow. The idea of guiding a life isn't just a concept. It's envisioning tiny hands gripping mine, curious eyes looking up as I share the tales of our ancestors. Helping them become better adults than I. I've even taken up hobbies like BBQing with the thought of passing them down as happy memories for my descendants.

As the warm water cascaded over me, the question echoed louder: _"Do I want children?"_ I recalled a tweet in which I joked about expecting to be married with kids by 30. Now, at 31, I realise that if I have my first child at 32, I'll be 50 when they turn 18. Thoughts of being humbled at school sports days crossed my mind, balanced by fantasies of regaining my "fatherly honour" by supporting them in every school endeavour; I will be older, and winning the foot race will be a young father's game. But perhaps my child will enjoy the ski trip I never experienced.

_"Are you making a joke to hide something?"_

The current economy is daunting. My peers and I grapple with an increasingly out-of-reach housing market, skyrocketing living costs, and the weight of financial instability. Many choose pets over children, and some can't afford that solace.

_"Why do you want the house?"_ 

It's not just about property ownership; it's about security, a place to live when I retire, free from the worries of rent during a time when income might be limited.

"You answered that question. Why did you not answer th—"

I envision a future where I can slow down, away from the relentless pace of city life. A spacious home where time is mine to command, where friends and family can visit without the rush to leave. But achieving this under our current system feels like chasing the smoke of Ecclesiastes (something intangible).

_"Do you only want children under those terms?"_ the voice persists.

Maybe I should pin it on my upbringing? As the first-born son of an Angolan refugee family, cultural and familial expectations weigh heavily. I have an implicit duty to carry on traditions and fulfil roles set long before me. That's why I'm investing in learning and preserving my mother tongue, so my children will have a connection to our culture deeper than the one I had. Growing up, stories of my heritage were a cornerstone of family gatherings. I vividly remember my mother's sister and her father visiting us from Angola, teaching my sister and me songs and stories about our culture. The weight of continuing those stories feels both like an honour and a burden.

_"You're a grown adult at 31, too old to blame your upbringing. If you're self-aware enough to notice its influence, why not interrogate why you feel this way?"_

I could chase this dream life by moving outside London, to Portugal, or even back to Angola. But then, I worry about the opportunities available for my potential children. As a Black man, I'm acutely aware that their outcomes will be influenced by the groundwork I lay today.

_"So why don't you work towards that?"_ the inner voice urges.

Because I fear that in seeking my own peace, I might compromise their future.

_"Are you stealing today's joy for a person that doesn't exist yet?"_

Ouch. Maybe I am. They say a wise man plants trees under whose shade he may never sit. Am I postponing my happiness for a future that isn't guaranteed?

Honestly, this has left me with more questions than answers. Do I want children because I genuinely desire them or because it's an internalised expectation?

Can I balance preparing for a possible future and embracing the present?

I'm sure the true answer lies not in the certainty of "yes" or "no" but in understanding the motivations behind my desires and fears. For now, I can only continue to reflect, question, and navigate this complex journey one step at a time.]]></content:encoded>
        </item>
<item>
          <title>The Dangers of AI</title>
          <link>https://www.ambacelar.com/blog/the-dangers-of-ai</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/the-dangers-of-ai</guid>
          <description>Amplifying Political Agendas and Racism on Social Media</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Wed, 18 Sep 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[In the ever-evolving social media landscape, the advent of AI has been nothing short of disruptive. It’s changed how we communicate, consume information, and perceive reality. However, as with any tool of great power, AI can be used for both good and harmful purposes, especially in politics. Social media platforms like X (formerly Twitter) are filled with instances where AI is being weaponised to amplify narratives that lean to one political side, often at the expense of minorities and the truth itself.

One of the most insidious examples of this manipulation is the use of 'verified' bot accounts, often controlled by real individuals using large language models (LLMs) to manage several accounts simultaneously. These accounts, masquerading as real users, create an echo chamber that amplifies right-wing content, creating the illusion of widespread agreement with harmful narratives. Their sole purpose is to give a platform to posts from accounts spreading vitriol, often of a racist, xenophobic, or misogynistic nature, thereby granting these voices an undue influence in the online discourse.

Casual racism is nothing new. It’s been a recurring theme on platforms like X. But recently, we’ve seen how AI amplifies these attacks on a massive scale. Take the case of Imane Khelif, the Algerian woman boxer who won gold at the Olympics. From the moment she triumphed over her Italian opponent, a torrent of abuse was directed at her, questioning her identity and womanhood, and AI bots piled on, signalling agreement with hateful posts and keeping the narrative alive for much longer than it would have naturally persisted.

And this is only one example. Recently, the right wing has shifted its focus to Haiti, amplifying a flood of racist misinformation. The target this time? Migrants who are moving into Ohio, a key swing state in the upcoming U.S. elections. The narrative is that these Haitians are criminals, with unfounded and grotesque claims that they eat pets or hunt wildlife. AI-powered accounts amplify and reinforce these lies, stoking fear and hostility in a population that has already started to respond violently to these narratives.

The purpose is clear: influence the swing state of Ohio ahead of the elections. These narratives are designed to sow fear and anger in local communities, encouraging a climate of distrust and even violence against Haitians. And these lies? They have real-world consequences. Already, people are absorbing this vitriol and acting on these false narratives, consciously or subconsciously.

One of the few silver linings is that users are beginning to identify and expose AI bots. Prompt injection, a technique where users trick AI models into revealing their artificial nature through clever prompts, has become a valuable tool in detecting these malicious actors. But while prompt injections can help root out some bots, many still slip through undetected, spreading their poison unchecked.

The fact that we need such methods to identify fake accounts points to a more significant issue: social media platforms fail to control how AI is being misused to manipulate public discourse. And the stakes couldn’t be higher. As of the time of writing (mid-September 2024), U.S. elections are just around the corner. A former president, deeply aligned with the conservative right, is leveraging AI-powered disinformation to sway public opinion. Swing states like Ohio are becoming battlegrounds, not just for votes but for the very soul of public discourse.

The racist attacks against the people of Haiti aren’t isolated incidents. They’re part of a broader campaign to push fear-based narratives that turn entire communities against one another. Stories about Haitian migrants in Ohio are being twisted beyond recognition: A man legally cleaning roadkill becomes a "Haitian immigrant hunting geese," or a neighbourhood barbecue becomes a rumour of "Haitian families eating dogs."

These may sound like absurd exaggerations, but they are dangerous because of their cumulative effect. Even if someone dismisses a single post as ridiculous, mimetic theory shows us that people begin to internalise these ideas once exposed to repeated messaging. It takes just one piece of confirmation bias for that vitriol to take root. Take this recent article, for example: ["Haitian Driver Makes Illegal Turn in Springfield, OH, Smashes Into Mom's Truck with Autistic Daughter in Back."](https://web.archive.org/web/20240915020002/https://nypost.com/2024/09/13/us-news/haitian-driver-makes-illegal-turn-in-springfield-oh-smashes-into-moms-truck-with-autistic-daughter-in-back/) Just from the headline, assumptions and fears are already being seeded, regardless of whether anyone reads the full article.

How many people stop to fact-check? How many simply scroll past headlines and form impressions based on half-truths or outright lies? And how many of those headlines have been carefully designed to spread fear and bolster political agendas?

It’s easy to think this problem only affects "other people" and that the victims of these AI-driven campaigns are distant from your reality. But make no mistake: the same technology used to target Haitians, Algerians, and other minorities can just as quickly be turned on you. AI is a tool, and like any tool, its purpose depends on the hands that wield it. Today, it’s being used to spread racism and fear. Tomorrow, it could be used to distort facts about your community, your beliefs, or your actions.

We live in a time when fewer people are critical of the platforms providing them with information. AI has made it easier than ever to manufacture consensus, create outrage, and ultimately manipulate society. If alarm bells aren’t already ringing, they should be.

AI brings incredible potential, but without oversight, it is becoming a weapon of disinformation and political manipulation. We must recognise this danger and act before it’s too late. Today, it may be Haitians. Tomorrow, it may be anyone who stands in the way of the political agendas these AI bots are programmed to serve. It’s time to be critical of the content we consume and the platforms we trust. If we don’t, we risk allowing these tools to erode the very foundations of our society.]]></content:encoded>
        </item>
<item>
          <title>AI will not take your tech job</title>
          <link>https://www.ambacelar.com/blog/ai-will-not-take-your-tech-job</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/ai-will-not-take-your-tech-job</guid>
          <description>At least, not all of them...</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Sat, 14 Sep 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[At least, not all of them.

Despite the buzz around AI potentially taking over tech jobs, the reality is far less dramatic. The current lack of jobs is more about global economic challenges than any impending AI singularity.

**The Real Reasons Behind the Current Job Market Slump**

_Global Economic Downturn_

The sluggish global economy is the main culprit behind the job shortage. Companies are navigating uncertain financial landscapes, leading to more cautious hiring practices.

_Overhiring During COVID-19_

During the pandemic, many larger companies hired excessively, sometimes even before having the work to assign. The goal was often to prevent competitors from acquiring specific talent. This talent hoarding has led to a surplus of employees without significant work, prompting companies to reassess their staffing needs.

**Changes in the US Tech Industry**

_Talent Hoarding and Legal Shifts_

In the US, companies used to prevent employees from jumping to direct competitors through strict non-compete agreements. Recent legal changes have altered this culture, allowing talent to move more freely between competitors. This shift means companies are less incentivised to hold onto talent they don't immediately need.

_Tax Code Adjustments_

Another factor affecting the market is the change in how companies can account for the cost of developers. Previously, companies could categorise developer expenses under R&D, receiving tax exemptions or lighter tax treatment. This advantage has diminished, making it less beneficial to hold onto staff without critical work.

**The UK Job Market Perspective**

_Employment Rates vs. Salaries_

Here in the UK, employment rates are similar to pre-pandemic levels. However, salaries have taken a significant hit. British companies are notoriously tight-fisted, often offering lower salaries unless market forces compel them to spend more.

_Political Ambiguity and Investor Confidence_

Political uncertainty has led to a decline in investor confidence. Investors dislike ambiguity, and the recent political climate has been anything but clear. With the 2024 election concluded and the budget announcement coming in a month's time (October 2024), businesses and investors will soon know where they stand and can strategise accordingly.

_Industry Hiring Trends_

The only employers actively hiring in the UK right now are finance firms and other large corporations. The stagnation in growth rates has further impacted the availability of tech jobs.

**The Reality of AI in Software Development**

_AI as a Tool for Efficiency_

Any company that can afford to hire developers will continue to do so. If AI can genuinely make developers more efficient, there's no reason for companies to stop hiring. Instead, they can deliver more value with the staff they already have, leveraging AI to enhance productivity.

_Business Incentives_

For example, work that used to take 18 months can now be delivered in 15 months with the aid of AI: a whole quarter gained. Unless forced to, companies are more likely to capitalise on this increased efficiency than reduce their workforce.

**Actionable Advice for Developers**

_Embrace AI Technologies_

My advice? Learn how to work with AI. Look for best practices and find a workflow that integrates AI tools effectively. As of September 2024, the ecosystem is still evolving in integrating Large Language Models (LLMs) into the Software Development Life Cycle (SDLC).

_Stay Ahead of the Curve_

Don't get left behind. Very soon, you'll see job postings expecting familiarity with the prevalent AI tools of the time, whether it's GitHub Copilot, Cursor, or whatever solution the company has adopted.

AI won't take all tech jobs, but it will change how we work. Adaptability is key. By embracing AI and integrating it into your workflow, you position yourself to thrive in the evolving tech landscape.]]></content:encoded>
        </item>
<item>
          <title>Hello, Web3!</title>
          <link>https://www.ambacelar.com/blog/blockchain-intro</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/blockchain-intro</guid>
          <description>A friend asked me for an intro, so I will share it with you too</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Tue, 20 Aug 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[# So, you want to create a blockchain product?

Okay, fine! Let's start with a definition of the blockchain itself. I'm sure you already know much of this, either through your own self-study or from your time at the blockchain conference, but I'll try to cover as much as possible. Please feel free to reach out about anything you would like more information on.

## What is a "blockchain"?

A blockchain is **a distributed database or ledger shared among a network of nodes (computers)**.

That's it.

They are best known for their crucial role in cryptocurrency systems for maintaining a secure and decentralised record of transactions, but they are not limited to cryptocurrency uses. The main _intention_ is to make data **immutable** (the general tech term for something that cannot be altered).

Because there is supposed to be no way to change a block, the only place where trust is needed is at the point where a user or program enters data. This reduces the need for trusted third parties (and is also, in my opinion, the whole point of blockchain technology: trustlessness), who are usually auditors or other humans who add cost or introduce mistakes.

## Okay, but how does a blockchain "work"?

I'm sure that you are familiar with spreadsheets (or other types of databases). A blockchain is similar, because it is a place where data is structured, stored, and accessed.

A blockchain consists of programs called scripts that conduct the tasks you usually would in a database: entering, accessing, saving, and storing information. A blockchain is also distributed, which means multiple copies are saved on many machines, and they must all match for the data to be valid.

The blockchain collects transaction information and enters it into a (usually) 4MB file called a block. Once it is full, certain information is run through a hashing algorithm, which produces a hexadecimal number called the block header hash.

The hash is then entered into the following block's header and hashed together with the other information in that header, creating a chain of blocks.

## Did you say "transacti-

Way ahead of you!

Transactions in Ethereum are cryptographically signed data messages that contain a set of instructions. These instructions can translate to sending Ether from one Ethereum account to another or to interacting with a smart contract deployed on the blockchain. Transactions are a simple but powerful concept that has allowed users worldwide to interact on a decentralised network.

Transactions can get pretty technical, and there's no real need to go too in-depth before you decide on a blockchain, but I'll share a few things that are relevant to the Ethereum blockchain running the Ethereum Virtual Machine (EVM).

**Accounts**

There are two types of accounts, smart contract accounts and externally owned accounts (EOA):

- *Externally owned accounts (EOA)* refer to accounts that humans manage, such as a personal Metamask or Coinbase wallet. This account is identified by a public key (also known as an account address) and is controlled by a private key. The public key is derived from the private key using a cryptographic algorithm. It's important to note that these accounts cannot store information other than your account's balance and nonce.

- *Smart contract accounts* (also known as contract accounts) also contain an address to balance mapping but differ because they can also include EVM code and storage. Contract accounts are controlled by the logic in the EVM code stored within the account.

Ethereum utilises the elliptic curve digital signature algorithm (ECDSA) to prove authentication (i.e., prove that we have a private key for our public address) and verify that our transaction comes from the account signing the transaction and is not fraudulent.

**Types of Transactions**

Let us tie the information we just learned about accounts with the *different types of transactions*:

- *Message call* transaction: A message call derives from an externally owned account that wants to interact with another EOA or contract account. An example of a message call would be sending Ether from one account to another, or interacting with a smart contract (e.g., swapping tokens on Uniswap).
- *Contract creation* transaction: A contract creation derives from an EOA to create a smart contract account (generally to store code and storage). An example of this type of transaction would be deploying a storage smart contract to store data.

**Transaction States**

- _Pending_: Transactions broadcasted to the network waiting to be mined. If the transaction is taking longer than expected, it's possible that your gas fee is not high enough to meet execution at the current time.
- _Queued_: A transaction that cannot be mined yet due to another pending transaction in the queue first or an out of sequence nonce.
- _Cancelled_: Can no longer be mined. Replaced by a transaction with a higher gas fee, same nonce value, and a null value for the data and/or value field.
- _Replaced_: Can no longer be mined. Used to replace current pending orders for faster execution or modification of values and data. This also consists of using the same nonce as the transaction you want to cancel and a higher gas fee.
- _Failed_: A transaction that resulted in an error due to a revert error, bad instructions, illogical code, or not enough gas to run the remainder of a function call.

### Smart Contracts

A smart contract is computer code that can be built into the blockchain to facilitate transactions. It operates under a set of conditions to which users agree. When those conditions are met, the smart contract conducts the transaction for the users.

### Data Storage

Another significant implication of blockchains is that they require storage. This may not appear to be substantial because we already store lots of information and data. However, as time passes, the number of growing blockchain uses will require more storage, especially on blockchains where nodes store the entire chain.

Currently, data storage is centralised in large centres. But if the world transitions to blockchain for every industry and use, its exponentially growing size would mean more advanced techniques to reduce its size or that any participants would need to continually upgrade their storage.

This could become significantly expensive in terms of both money and physical space needed, as the Bitcoin blockchain itself was more than 575 gigabytes on June 14, 2024—and this blockchain records only bitcoin transactions. This is small compared to the amount of data stored in large data centres, but a growing number of blockchains will only add to the amount of storage already required for the connected and digital world.

## Wait, if the blockchain is immutable, does that mean that things don't change ever?

Yes, and no.
What is immutable are the transactions on the blockchain: you don't get to change the values of existing transactions, but what you _can_ do is create a new transaction that changes the value.

### huh?

Okay, let's give an example:

```
Stephanie creates a wallet
```

Great, we both have wallet addresses now. However, we have done nothing on the blockchain just yet. But you do have _some_ state to observe.

If we look at your wallet, we will see that your balance is zero:

```
Stephanie wallet value: 0u
```

Okay, so let's fund your wallet

```
Stephanie buys 10u from an on-ramp platform.

Transaction:
	type: message_call
	from: Onramp platform
	to: Stephanie
	value: 10u
```

Nice!

So, if we look at what happened: you made a purchase with an on-ramp platform, much like Coinbase, and they transferred `10u` from _their_ wallet to yours. This is a transaction, and transactions are how stored values are changed!
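In other words, a balance is just a view derived by replaying the immutable transaction log. A hypothetical JavaScript sketch of that idea (the account names and the `u` amounts are just the example above):

```js
// The log is append-only: we never edit a transaction, only add new ones.
const transactions = [];

function transfer(from, to, value) {
  transactions.push({ type: 'message_call', from, to, value });
}

// A balance is not stored anywhere; it is derived by replaying the log.
function balanceOf(account) {
  return transactions.reduce((total, tx) => {
    if (tx.to === account) return total + tx.value;
    if (tx.from === account) return total - tx.value;
    return total;
  }, 0);
}

transfer('onramp', 'stephanie', 10); // the on-ramp purchase above
transfer('stephanie', 'adilson', 3); // a later payment

const stephanieBalance = balanceOf('stephanie'); // derived, not stored
```

So "changing" a value never rewrites history; it just appends to it, and the current state is whatever the full history adds up to.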

### What about smart contracts?

Okay, let's look at these!

Contracts in Solidity are similar to classes in object-oriented languages. Each contract can contain declarations of State Variables, Functions, Function Modifiers, Events, Struct Types and Enum Types. Furthermore, contracts can inherit from other contracts. They can even create other contracts! 👀

I'm going to create a basic smart contract in Solidity for the EVM:

```js
// Specifies the version of Solidity for this smart contract.
pragma solidity >=0.7.3;

contract HelloStephanie {
	// Declares a state variable `message` of type `string`.
	// State variables are variables whose values are permanently stored in contract storage. The keyword `public` makes variables accessible from outside a contract and creates a function that other contracts or clients can call to access the value.
	string public message;

	// Declares a variable for the contract owner; the lack of a "public" keyword means that this variable is not accessible from outside the contract.
	address owner;
	address creator;

	// This is the constructor that is run when the smart contract is initialised; it will register the creator and the initial message.
	constructor(string memory initMessage) {
	    // Accepts a string argument `initMessage` and sets the value into the contract's `message` storage variable.
		message = initMessage;
		// Sets the address of the smart contract owner; this wallet will have admin permissions over this smart contract (this should be changed when transferring the smart contract).
		owner = msg.sender;
		// The token creator should never change; it could be used to ensure royalties are gathered when withdrawing any value from the contract. (Yes, smart contracts can hold onto funds.)
		creator = msg.sender;
	}
	// Emitted when the changeMessage function is called.
	// Smart contract events are a way for your contract to communicate that something happened on the blockchain to your app front-end, which can be 'listening' for certain events and take action when they happen.
	event UpdatedMessages(string oldStr, string newStr);


	// A public function that accepts a string argument and updates the `message` storage variable.
	function changeMessage(string memory newMessage) public {
		// if you aren't the current owner of this smart contract, you cannot change the value of the message, so we exit early.
		if (msg.sender != owner) return;

		string memory oldMsg = message;
		message = newMessage;
		emit UpdatedMessages(oldMsg, newMessage);
	}
}
```

The above is the source code for a smart contract that, when deployed, provides a message to anyone who wishes to look at it, for free!

But only one account can make changes to it. If you want to see an example of a production-level smart contract, let's look at the Bored Ape Yacht Club (BAYC) smart contract, found [here](https://etherscan.io/token/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code).
Clicking that link will take you directly to the source code for the contract, available for all to see!

But that's not the important thing. If you click on the `Read Contract` tab, you will be able to see all of the `public` variables, which are effectively "immutable" unless you allow them to be updated via another transaction.

(It is common practice to include a way to "freeze" a contract once all the values are finalised, using similar logic to how we checked for the contract owner.)

e.g.

```solidity
pragma solidity >=0.7.3;

contract CheckFrozen {

	bool public isFrozen;
	string public message;

	address owner;
	address creator;

	constructor(string memory initMessage) {
		message = initMessage;
		owner = msg.sender;
		creator = msg.sender;
	}

	function freezeContract() public {
		// Can only be run once. Cannot be undone.
		if (isFrozen == true) return;
		if (msg.sender != owner) return;

		isFrozen = true;
	}

	function changeMessage(string memory newMessage) public {
		if (isFrozen == true) return;
		if (msg.sender != owner) return;

		string memory oldMsg = message;
		message = newMessage;
	}
}
```
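
To see how those guard clauses behave, here is a plain JavaScript sketch that simulates the same owner/freeze logic off-chain. This is a mental model only, not how the EVM runs the contract, and the function and variable names are my own:

```js
// Off-chain simulation of the CheckFrozen guard clauses.
// `sender` stands in for `msg.sender`; addresses are plain strings here.
function createCheckFrozen(deployer, initMessage) {
  let isFrozen = false;
  let message = initMessage;
  const owner = deployer;

  return {
    message: () => message,   // `public` variables can be read by anyone, for free
    isFrozen: () => isFrozen,
    freezeContract(sender) {
      if (isFrozen) return;         // can only run once
      if (sender !== owner) return; // only the owner may freeze
      isFrozen = true;
    },
    changeMessage(sender, newMessage) {
      if (isFrozen) return;         // frozen contracts reject updates
      if (sender !== owner) return; // only the owner may update
      message = newMessage;
    },
  };
}

// Only the owner can change the message, and freezing is final.
const c = createCheckFrozen("0xOwner", "hello");
c.changeMessage("0xStranger", "hacked"); // ignored: wrong sender
c.changeMessage("0xOwner", "updated");   // applied
c.freezeContract("0xOwner");
c.changeMessage("0xOwner", "too late");  // ignored: the contract is frozen
```

Once frozen, no code path can set `isFrozen` back to `false`, which is exactly the property the on-chain version relies on.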

Take note that every interaction that changes a smart contract's state is a `transaction` and therefore incurs a transaction fee!

## Okay, what else is there?

I mean, I think we have already covered everything needed to get you to the level of a baseline Web3 developer/project owner.

Other topics to look into, depending on the project that you are working on:

- Real World Events triggering a smart contract
- What is the nature of the data that needs to be stored? Do you need a Web3 file storage system (for things like PDFs)? Please take note that there isn't any _real_ privacy on the blockchain; if files are added onto the blockchain, no encryption will endure a concentrated attack attempt for very long. So making careful use of private variables is essential!
  - [IPFS](https://ipfs.io/) is a distributed file system designed to help store and access data across a peer-to-peer network. IPFS empowers developers to store, timestamp, and secure large files without having to put the data itself on-chain.
  - [Arweave](https://www.arweave.org/) takes the idea of decentralized file storage further by ensuring the permanence of data. Arweave is building a permaweb, managed by a global community of users and developers who are incentivized to maintain the data storage layer.
  - [Gaia](https://github.com/stacks-network/gaia) is a decentralized storage platform available to developers building apps on Stacks. The transactional metadata of the apps is stored on the blockchain itself, while user application data is stored in Gaia to ensure that users enjoy high performance and availability.
- Security. For the things that you are trying to do, a high level of assurance and security may be essential. A blockchain will only be as good as the people designing it. And once something exists, even if you make a new transaction to update it, be aware that the prior, potentially compromised version will exist on the blockchain forever.]]></content:encoded>
        </item>
<item>
          <title>I will not wear your GPT</title>
          <link>https://www.ambacelar.com/blog/i-will-not-wear-your-gpt</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/i-will-not-wear-your-gpt</guid>
          <description>I finished work today and jumped on twitter to see a single payment AI wearable &quot;friend&quot;...</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[Please... let's be serious right now.
friend.com has been making the rounds recently: an AI wearable pendant. It doesn't even seem to be an assistant, just a piece of technology that is always listening to you and will send you a push notification to your mobile phone (only iOS at this time)... that's it.

And they're charging you $99 now for a pre-order, only available in the US and Canada.

So, you're considering pre-ordering this product? Even after seeing the Humane Pin and the Rabbit R1? What's the purpose or benefit that justifies this decision?

The company wishes to enter the hardware space, but the form factor doesn't allow for any meaningful computation to happen on the device. This is evidenced by the fact that it requires a phone not just for setup but also for operation. This means it's just a fancy microphone with lights and haptics. Don't forget the USB-C charging!

From a technical standpoint, this is a company attempting to break into a low-margin industry (hardware) by leveraging a high-cost service (a GPT wrapper). But here's the kicker: there's no monthly subscription. This raises the question of how this product plans to survive beyond a few months, a red flag that potential buyers should not ignore.

My prediction is that either the product is never released, or the value to the company or its investors comes from the "always listening" aspect of the product. We know that our phones and smart speakers are forever listening in on us. Still, sadly for advertisers, companies such as Google, Amazon, Apple and other phone manufacturers aren't interested in giving that access to everyone else. You can't just install an app onto someone's phone to spy without the user questioning why that orange dot is on their status bar. So now we need a new way of spying and listening in on what you are doing and where. And that's where I believe products like these will come in.

I get it. They claim to protect our privacy: "No audio or transcripts are stored past your friend’s context window." However, as of yesterday, when their terms and conditions were updated, they did not include anything to cover such promises, which is strange (or not, since the product almost certainly doesn't exist yet, so why write legal documentation for it?).

They will create and release a relatively limited number of units to keep costs low (not just manufacturing, QA and customer service, but also powering whatever AI service they're running for speech-to-text, speaker identification and generating a short response... if it's not all just handled by ChatGPT or similar with relatively simple prompt engineering) and will quietly fade into the background, just like other AI hardware has in the past.

Also, the timing is awful. Besides the fact that **they** want to be listening in on you 24/7, even in the bathroom, why is this _not_ a mobile app? At least that way you could deliver your own small model to the devices and have it run on the native hardware. All phones released in the last year have NPUs, and honestly, no one who isn't already interested in having an NPU in their phone by Q1 2025 is going to care about your spy pendant.

Would I buy a wearable? What would my list of concerns be?

- Local computing - This matters because, without local computing, this is just a piece of hardware making potentially expensive API requests for everything while also spying on me. Zero control, zero ownership, zero use without internet.
- Not easily covered/replaced by my phone - Since I already have a smartphone that knows me well and has seen and heard everything I have, why would I add another piece of tech to carry everywhere? How will this wearable help me remember things without my calendar? Will it assist me with interacting with others without access to my messages?
- Not just an LLM - Please, oh please, oh please... we are tired! The majority of things done or considered by these tools are worse than just googling. I don't want more hallucinations; I want accurate data retrieval and categorisation that can then be expressed in a human-readable format. I do NOT want an autocomplete algorithm to simply try to calculate what an answer to my question might be. This is a limitation of LLMs as they exist right now.

I'm not going to continue that list, because I can't imagine myself, someone already deep in the Apple ecosystem, picking up an assistant tool that isn't also part of that ecosystem. Why would I trust a brand-new company that came out of nowhere with access to my emails, messages, contacts and notes, and with the ability to listen in on my phone calls, to be useful as an assistant? It's not reasonable in my eyes.]]></content:encoded>
        </item>
<item>
          <title>On data and death</title>
          <link>https://www.ambacelar.com/blog/on-data-and-death</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/on-data-and-death</guid>
          <description>What will my grandchildren learn of me if they can not read this?</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[I "bought" a song on Amazon/Apple Music...
Once upon a time, this would have been exclusively physical media. Now, we are permitted to access digital media only as long as the platform holds the licence to distribute it.

Things are broken.

My friends and I were (and still are) very dedicated to a game genre (fighting games) and, specifically, a game called (Ultimate) Marvel vs Capcom 3. This game took many years from us, but it was great. However, the licensing agreement between Marvel and Capcom ended, so the game was removed from all marketplaces.

No more online purchases, including DLCs.

Even if I did purchase it previously, I can no longer re-download it.

Lost media... at least until Marvel vs. Capcom: Infinite was announced and released, and suddenly the licence was renewed... for now? Even today, the game is available on Steam, but Microsoft has just announced that the Xbox 360 marketplace is shutting down.

Before my uncle travelled back to Angola, he passed me a PlayStation One complete with games.
Will the PlayStation Network still exist if I hold onto my PS4 for 15 more years and turn it on? What will still be playable? Will the SSD have deteriorated? What will be left at that point? Is there a _legal_ way in place for me to interact with the content I have paid quite a lot of money for just 20 years later? Even if the SSD is still good, and I have games on physical discs, will they even be playable without the marketplace, considering how many games come out needing day-one patches?

What about movies and music?
Well, it's the same. Many who read this on release will remember that a major music distributor removed their entire content library from TikTok. Others may have experienced opening a post on Instagram where the story has been muted because the song (that they selected on the platform itself) had its licence revoked within 24 hours of the story being posted.

We consumers are quite literally left to the whims of these licence owners regarding when we can access the things we "buy". This is not ownership, and this needs to be addressed.

The first generation of digital owners will soon pass on, and we need to see what will happen to their assets when they do.
Of course, nearly every company has moved to a subscription model, but what about products that are still viable today and were purchased with a perpetual licence? Does this mean that once I retire, I can pass my account on to my child or my apprentice?

If I've purchased a lot of movies on Amazon Prime, why would I not share my account with my children when I reach a certain age? I want them to benefit not just from my Prime account but also to have access to the movies and shows I've already purchased on the platform.

The most significant problem is where my "bought" content sits. Today, it's hard to believe that Amazon or YouTube can go down, but there's no guarantee that they will exist in 50 years, let alone 100. We have not thought these things through properly. And we will soon see how much we have ignored when more people pass.

If Instagram truly lasts a century as a product, will our children have access to our accounts as a photo album after we pass? Or will we need a new product to store that data because Meta decides that a dormant account that can't be advertised to is a waste of bandwidth and username?

Will my great-grandchildren be able to read through my thoughts on X (formerly Twitter) whenever they get curious about who I am? Hell, will they be able to read this post if I am not here? If no one pays for my domain anymore? If this server is not protected?

Is this site itself just future lost media? How can I prevent that?

Can crypto be the answer, with the immutability of all content on the blockchain? But I am sure that even the blockchain has a file size limit. It may be time for me to research _how_ the blockchain is stored and on which computers.

I sure don't want to buy an album on Apple Music and not be able to listen to it because I didn't have it downloaded when the license owners revoked access.]]></content:encoded>
        </item>
<item>
          <title>From Frontend to Fullstack: Navigating My Career Path</title>
          <link>https://www.ambacelar.com/blog/career-planning</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/career-planning</guid>
          <description>What do I see in my future right now?</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Sat, 20 Jul 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[## Introduction

As a Senior Frontend Developer in London, UK, I'm at a crossroads in my career. Reflecting on my current role and aspirations, I've realised it's time for a change. Here's a snapshot of where I am and the steps I'm considering to reshape my professional path.

## Current Situation

I am a Senior Frontend Developer based in London, UK. I am currently working as the Technical Lead for a team building a greenfield mobile app in React Native, replacing an existing Xamarin app. This role has been my first full-time experience with React Native, presenting new challenges and learning opportunities, such as binding native modules and maintaining performance at 60 fps. However, after mastering these skills, the work has become mundane.

## Reflections on Current Role

As with web development, the novelty wears off, and the backend teams handle all the new business logic. Decision-making in my current role is limited to selecting sprint tickets, which feels unsustainable. This doesn't feel like Software "Engineering"; it's more about using frameworks and libraries to meet business needs.

## Can I see myself doing this for another year?

No.

## Future Prospects

Are there many places that will offer me better opportunities if I remain a frontend React developer? Probably not. The JavaScript ecosystem evolves quickly, but UK companies don't. The "fast-paced environment" often translates to mismanagement and unrealistic goals.

I started in game development, which was engaging and diverse. I may need to evolve to find new challenges interesting again.

## What can I do?

I've taken the opportunity to contribute to the platform team, developing internal developer tooling that has significantly reduced testing cycle times. I've also integrated native GPS, gyroscope, and accelerometer features to simulate values for third-party validation. Additionally, I'm working on tools to speed up core dependency upgrades. These efforts, while meaningful, only partially satisfy my desire for more engaging work.

## But is that it?

No, these contributions make my current role less mundane but not fulfilling. So, what's next?

## Escape

I may need to leave the frontend space. This is challenging because most UK opportunities are in financial institutions, and I'm not keen on becoming a Quantitative Developer. While it might be a possibility, I'd need to explore it further.

## So, what does that journey look like?

A more direct path might be transitioning to a fullstack or software engineer role, contributing across the entire product. Start-ups could be ideal since they often use technologies like Node.js to quickly find product-market fit, unlike established companies with dedicated backend teams.

## And until then?

In the meantime, I can focus on personal projects. I have several ideas for end-to-end products, not just frontend experiments. This site can serve as a hub for these projects.

## Conclusion

Navigating a career shift is challenging but exciting. As I explore these new paths, I welcome any insights or suggestions from those who have made similar transitions. Let's connect and share our journeys.]]></content:encoded>
        </item>
<item>
          <title>Sigh, I really do not like SCRUM</title>
          <link>https://www.ambacelar.com/blog/scrum</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/scrum</guid>
          <description>The Missteps of Misusing SCRUM: Navigating Fixed Constraints and Prolonged Crunch Periods</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[## The Triad of Fixed Constraints

### Fixed Delivery Date

A fixed delivery date is a ticking time bomb in any SCRUM implementation. SCRUM thrives on flexibility, with the sprint framework allowing for iterative development and constant reassessment of priorities. However, when a hard deadline looms, the room for this iterative process shrinks dramatically.

### Fixed Budget

Budgets are rarely elastic in corporate environments, and this financial rigidity can stifle the agile spirit of SCRUM. Teams are often pressured to deliver more with less, undermining the iterative process that relies on feedback and adjustment, potentially leading to compromised quality.

### Fixed Scope

The fixed scope is perhaps the most contradictory element to SCRUM's core principles. SCRUM encourages scope evolution through regular sprint reviews and backlog refinement. A rigid scope negates this adaptive planning, forcing teams to deliver a pre-defined set of features irrespective of emerging insights or changing priorities.

## The Impact of Crunch Periods

Crunch periods—prolonged phases of excessive work hours to meet deadlines—are the bane of a healthy SCRUM environment. These periods often lead to burnout, diminished productivity, and lower-quality deliverables, undermining the sustainable pace advocated by agile methodologies.

## The Pitfall of Micromanagement

### The Drive for Visibility

In environments with tight deadlines and fixed constraints, companies often resort to micromanaging, driven by a desire for increased visibility. Anxious about meeting deadlines, stakeholders may demand extensive sprint ceremonies to monitor progress closely. These ceremonies, while intended to improve transparency, can become excessively time-consuming.

### The Burden on Development Teams

Excessive meetings and check-ins detract from the time developers and QA members have to focus on their core tasks. The practices designed to ensure transparency and progress can paradoxically slow development. The constant scrutiny and the need to report frequently can create a stressful atmosphere, reducing overall productivity and morale.

## The Role of the Technical Lead

### Shielding the Team

As a technical lead, one of your crucial responsibilities is to act as a buffer between your team and business pressures. This means protecting your developers and QA members from excessive micromanagement and allowing them to focus on their tasks without constant interruptions. Balancing transparency with giving your team the space to be productive is essential.

### Managing Expectations

Effective communication with stakeholders is critical. As a technical lead, you must set realistic expectations regarding what can be achieved within the fixed constraints. You must regularly update stakeholders on progress and potential risks, advocating for the team's needs and pushing back against unrealistic demands when necessary.

### Fostering a Positive Environment

Maintaining team morale during crunch periods is challenging but critical. Encourage a culture of open communication where team members feel comfortable voicing concerns. Acknowledge your team's hard work and find ways to mitigate stress, such as flexible working hours or short breaks during intense periods.

## Alternatives to SCRUM for Fixed Constraint Environments

Alternative methodologies may offer a better fit for development teams struggling to maintain SCRUM under fixed constraints and prolonged crunch periods. Each comes with its own set of pros and cons:

### Kanban

**Pros:**

- **Flexibility:** Kanban allows for continuous delivery without the need for fixed-length sprints, making it easier to handle changes in priorities and scope.
- **Visibility:** Visual boards provide clear insight into work progress and bottlenecks.
- **Focus on Flow:** Emphasises smooth workflow and efficiency, which can help manage workload more effectively.

**Cons:**

- **Less Structure:** The lack of structured iterations can lead to less predictability in planning.
- **Potential for Overload:** Without fixed sprints, there is a risk of work piling up if not managed carefully.

### Lean Development

**Pros:**

- **Efficiency:** Focuses on delivering value and eliminating waste, which can be beneficial in constrained environments.
- **Continuous Improvement:** Encourages ongoing process improvements, which can help adapt to fixed constraints over time.

**Cons:**

- **Requires Discipline:** A strong commitment to process improvements and waste elimination is needed.
- **Cultural Shift:** This may require significant team and organisational culture changes to be implemented effectively.

### Waterfall

**Pros:**

- **Clear Structure:** Provides a straightforward, linear approach with defined stages, which can be easier to manage with fixed scope and deadlines.
- **Predictability:** More predictable planning and budgeting can appeal to stakeholders in constrained environments.

**Cons:**

- **Inflexibility:** Less adaptable to changes once the project is underway, which can be problematic if requirements evolve.
- **Late Testing:** Testing is typically deferred until the end of the project, increasing the risk of discovering critical issues late.

## Strategies to Maintain SCRUM Integrity

### Prioritise Backlog Flexibility

Even with a fixed scope, there is often room to prioritise features within the backlog. Engage stakeholders to understand which elements are most critical and which can be deferred or adjusted. This prioritisation helps maintain some level of agility within the rigid framework.

### Transparent Communication

Maintaining transparency with stakeholders is crucial. Regularly communicate the impacts of fixed constraints and crunch periods on the team's performance and the project's quality. Transparency fosters trust and can lead to more reasonable expectations and timelines.

### Iterative Releases

Focus on delivering smaller, incremental releases rather than a monolithic final product. This approach helps manage fixed delivery dates by ensuring continuous progress and providing stakeholders with tangible outputs, which can be invaluable for feedback and adjustments.

### Emphasise Quality over Quantity

Resist the temptation to overload sprints with excessive work. Instead, prioritise high-quality deliverables. Emphasising quality over sheer quantity helps maintain team morale and ensures that what is delivered meets a high standard, even if the scope must be adjusted.

### Limit and Optimise Sprint Ceremonies

While sprint ceremonies are essential for SCRUM, striking a balance is important. Limit meetings to the essential ones and keep them concise. Use asynchronous communication tools to update stakeholders without pulling the team away from development tasks.

### Regular Retrospectives

Never skip retrospectives, even during crunch periods. These sessions are vital for identifying pain points and understanding what's working and what's not. They allow the team to voice concerns and collaboratively seek solutions, ensuring continuous improvement.

### Stakeholder Education

Educate stakeholders on the principles of SCRUM and the negative impacts of fixed constraints, micromanagement, and crunch periods. Helping them understand the value of flexibility can lead to more supportive and realistic project planning.

## Conclusion

While SCRUM is designed to be flexible and adaptive, the reality of fixed delivery dates, budgets, and scope often imposes significant challenges. Coupled with extended crunch periods and the tendency for micromanagement, these constraints can erode the effectiveness of agile methodologies. However, by prioritising backlog flexibility, maintaining transparent communication, focusing on iterative releases, and emphasising quality, teams can uphold the principles of SCRUM and navigate these hurdles. Optimising sprint ceremonies and educating stakeholders ensure that SCRUM practices are maintained without compromising the development process.

As a technical lead, your role is pivotal in shielding your team from undue pressure, managing expectations, and fostering a positive environment. Exploring alternative methodologies like Kanban, Lean Development, or even Waterfall might provide better alignment with the constraints faced. Navigating these challenges is no small feat, but with a committed approach to SCRUM's core values, development teams can still deliver outstanding results without compromising their well-being or the quality of their work.]]></content:encoded>
        </item>
<item>
          <title>Hello, World!</title>
          <link>https://www.ambacelar.com/blog/hello-world</link>
          <guid isPermaLink="true">https://www.ambacelar.com/blog/hello-world</guid>
          <description>I am trying to figure out what my goal is for this site</description>
          <author>adi@ambacelar.com (Adilson Bacelar)</author>
          <pubDate>Fri, 05 Jul 2024 00:00:00 GMT</pubDate>
          <content:encoded><![CDATA[Well, here it is, my personal site. It's quaint... and barebones, but it's mine!

But what am I doing here? I guess that's up to me at whatever point I'm at, whether this is a place to put my explorations in tech and programming, a way to incentivise myself to build cool projects and display them, or even a place that has me as the primary audience, as opposed to you, whether you are a recruiter, a hiring manager or someone with the final say on whether I get hired...

I'm sure my answer to the above will change over time... but that's life?

If I one day launch a successful business so that I don't ever need to look for employment again, I'm sure my motivations on this personal site will change, but for now... I'm introducing myself to the world... a hub page where anyone curious can see any of the projects I'm working on or even the projects I've worked on in the past.

If things go well, I plan to write up my thoughts on the things I'm working on, the things I'm studying, and the content I'm creating.

Welcome to the theatre of the mind!]]></content:encoded>
        </item>
    </channel>
  </rss>