[{"data":1,"prerenderedAt":813},["ShallowReactive",2],{"/en-us/blog/moving-to-gitlab-yes-its-worth-it":3,"navigation-en-us":33,"banner-en-us":443,"footer-en-us":453,"blog-post-authors-en-us-Fabio Akita":695,"blog-related-posts-en-us-moving-to-gitlab-yes-its-worth-it":709,"blog-promotions-en-us":749,"next-steps-en-us":803},{"id":4,"title":5,"authorSlugs":6,"body":8,"categorySlug":9,"config":10,"content":14,"description":8,"extension":22,"isFeatured":12,"meta":23,"navigation":24,"path":25,"publishedDate":20,"seo":26,"stem":30,"tagSlugs":31,"__hash__":32},"blogPosts/en-us/blog/moving-to-gitlab-yes-its-worth-it.yml","Moving To Gitlab Yes Its Worth It",[7],"fabio-akita",null,"open-source",{"slug":11,"featured":12,"template":13},"moving-to-gitlab-yes-its-worth-it",false,"BlogPost",{"title":15,"description":16,"authors":17,"heroImage":19,"date":20,"body":21,"category":9},"Customer Story: Moving to GitLab! Yes, it's worth it!","Migrating from GitHub to GitLab and setting up your own GitLab instance",[18],"Fabio Akita","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749665885/Blog/Hero%20Images/love-the-sun-gitlab.jpg","2016-08-04","**Note:** This post is a customer story on the benefits of migrating from GitHub to GitLab, by Fabio Akita, a Brazilian Rubyist.\n\n\nI started [evangelizing Git in 2007][evang]. It was a very tough sell to make at the time.\n\nOutside of the kernel development almost no one wanted to learn it and we had very worthy competitors, from Subversion, to Mercurial, to Bazaar, to Darcs, to Perforce, and so on. But those of use that dug deeper knew that Git had the edge and it was a matter of time.\n\nThen GitHub showed up in 2008 and the rest is history. For many years it was just \"cool\" to be in GitHub. The Ruby community drove GitHub up into the sky. 
Finally it became the status quo and the one real monopoly in information repositories - not just software source code, but everything.

I always knew that we should have a "local" option, which is why I tried to [contribute to Gitorious][gitorious] way back in 2009. Other options arose, but eventually GitLab appeared around 2011 and picked up steam in the last couple of years.

GitHub itself raised [USD 350 million in funding][gh-fund], and one of its required goals is to nail the Enterprise Edition for big corporations that don't want their data outside their walled gardens. Although GitHub hosts nearly every open source project out there, GitHub itself is closed source.

[GitLab Inc.][GL] started differently, with an open-source-first approach: a Community Edition (CE), a GitHub-like hosted option, and a supported Enterprise Edition for fearful corporations. They have already raised [USD 5.62 million in funding][gl-fund], and they are the most promising alternative to GitHub so far.

<!-- more -->

Of course, there are other platforms, such as Atlassian's Bitbucket. But I believe Atlassian's strategy is slower, and they have a larger suite of enterprise products to sell first, such as Confluence and Jira. I don't think they ever posed much of a competition to GitHub.

GitLab really started accelerating in 2015, as this [commit graph][comm-graph] shows:

![contributors to gitlabhq](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/contributors-to-gitlabhq.png)

It's been growing steadily since 2011, but they seem to have crossed the first tipping point around late 2014, from early adopters to the early majority. This became more important when **GitHub** announced their [pricing changes][gh-prices] in May.

They said they haven't committed to a deadline to enforce the change, so organizations can opt out of the new format for the time being.
They are changing from \"limited repositories and unlimited users\" to \"unlimited repositories and limited users\".\n\n## The Cost-Benefit Conundrum\n\nFor example, if you have up to 8 developers in the USD 50/month (20 private repositories), the change won't affect you, as you will pay USD 25/month for 5 users and USD 9 for additional users (total of USD 52/month).\n\nNow, if you have a big team of 100 developers currently in the Diamond Plan of USD 450/month (300 private repositories), you would have to pay USD 25/month + 95 times USD 9, which totals a staggering USD 880/month! **Double the amount!**\n\nThis is an **extra USD 10,560** per year!\n\n\nAnd what does **GitLab** affords you instead?\n\nYou can have way more users and more repositories in a **USD 40/month** virtual box (4GB of RAM, 60GB SSD, 4TB transfer).\n\n\nAnd it doesn't stop there. GitLab also has very functional [GitLab Multi Runner][runner] which you can install in a separate box (actually, at least 3 boxes - more on that below).\n\nYou can easily connect this runner to the build system over GitLab so every new git push trigger the runner to run the automated test suite in a Docker image of your choosing. So it's a fully functional, full featured Continuous Integration system nicely integrated in your GitLab project interface:\n\n![pipelines](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/pipelines-cm42-archived-gitlab.png)\n\n![builds](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/test-144-builds-cm42-archived-gitlab.png)\n\nReminds of you anything? Yep, it's a fully functional alternative to Travis-CI, Semaphore, CircleCI or any other CI you're using with a very easy to install procedure. Let's say you're paying **Travis-CI USD 489/month to have 10 concurrent jobs**.\n\nYou can install **GitLab Runner** in **3 boxes of USD 10/month** (1GB RAM, 1 Cores, 30GB SSD) and have way more concurrent jobs (20? 50? Auto-Scale!?) 
that **run faster** (in a simple test, a build that took 15 minutes on Travis took less than 8 minutes at Digital Ocean).

So let's do the math for a year's worth of service, first assuming no GitHub plan change:

USD 5,400 (**GitHub**) + USD 5,868 (**Travis**) = **USD 11,268 a year**.

Now, GitLab + GitLab Runner + Digital Ocean, for the same features with unlimited users, unlimited repositories, and unlimited concurrent builds:

USD 480 (**GitLab**) + USD 840 (**Runner box**) = **USD 1,320 a year**.

This is already **about 8.5x cheaper**, with almost **no change in quality**.

For the worst-case scenario, compare it with **GitHub** deciding to enforce the new plans:

USD 10,560 (**GitHub new plans**) + USD 5,868 (**Travis**) = **USD 16,428**

Now the **GitLab** option is **more than 12x cheaper**! You're **saving more than USD 15,000 a year!** This is not something you can ignore on your cost sheet.

As I said, the calculations above are only significant for a scenario of 100 developers. You must do your own math, taking into account your team size and number of active projects (you can always archive unused projects).

Even if you don't have 100 developers, the savings hold. Consider the scenario for 30 developers on the new GitHub per-user plans and a smaller Travis configuration with 5 concurrent jobs:

USD 3,000 (**GitHub new plan**) + USD 3,000 (**Travis**) = **USD 6,000**

The **Digital Ocean + GitLab** suite is **about 4.5x cheaper**.

Heck, let's consider the current **GitHub** plan (the Platinum one, for up to 125 repositories):

USD 2,400 (**GitHub current plan**) + USD 3,000 (**Travis**) = **USD 5,400**

Still **at least 4x more expensive** than a **GitLab-based** solution!

And how long will it take for a single developer to figure out the setup and migrate everything from GitHub over to the new **GitLab** installation?
I'd say you can reserve one week of work for the average programmer to do it, following the official documentation and my tips and tricks below.

## Installing GitLab CE

I will not bore you with what you can readily find on the Web. I highly recommend you start with the easiest solution first: [Digital Ocean's One-Click Automatic Install][do-inst]. Install it on at least a 4GB RAM machine (you will want to keep it if you like it).

Of course, there are a number of different installation options, from AWS AMI images to Ubuntu packages you can install manually. Study the [documentation].

It will cost you **USD 40 for a month of trial**. If you stand to **save tens of thousands of dollars**, this is a bargain.

GitLab has many customization options. You can lock down your private GitLab to allow only users with an official e-mail from your domain, for example. You can configure [OAuth2 providers][omni-auth] so your users can quickly sign in using their GitHub, Facebook, Google or other accounts.

### A Few Gotchas

I've stumbled upon a few caveats in the configuration, which is why I recommend that you plan ahead (study this entire article ahead of time!) and do a quick install that you can blow away, so you can "feel" the environment before trying to migrate all your repos over to your brand new GitLab.
As a reference, this is a part of my `/etc/gitlab/gitlab.rb`:

```ruby
# register a domain for your server and place it here:
external_url "http://my-gitlab-server.com/"

# you will want to enable LFS (https://git-lfs.github.com)
gitlab_rails['lfs_enabled'] = true

# register your emails
gitlab_rails['gitlab_email_from'] = "no-reply@my-gitlab-server.com"

# add your email configuration (template for Gmail)
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.gmail.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "-- some no-reply email --"
gitlab_rails['smtp_password'] = "-- the password --"
gitlab_rails['smtp_domain'] = "my-gitlab-server.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_openssl_verify_mode'] = 'peer'

# this is where you enable OAuth2 integration
gitlab_rails['omniauth_enabled'] = true

# CAUTION!
# This allows users to log in without having a user account first. Define the allowed providers
# using an array, e.g.
[\"saml\", \"twitter\"], or as true/false to allow all providers or none.\n# User accounts will be created automatically when authentication was successful.\ngitlab_rails['omniauth_allow_single_sign_on'] = ['github', 'google_oauth2', 'bitbucket']\ngitlab_rails['omniauth_block_auto_created_users'] = true\n\ngitlab_rails['omniauth_providers'] = [\n  {\n    \"name\" => \"github\",\n    \"app_id\" => \"-- github app id --\",\n    \"app_secret\" => \"-- github secret --\",\n    \"url\" => \"https://github.com/\",\n    \"args\" => { \"scope\" => \"user:email\" }\n  },\n  {\n    \"name\" => \"google_oauth2\",\n    \"app_id\" => \"-- google app id --\",\n    \"app_secret\" => \"-- google secret --\",\n    \"args\" => { \"access_type\" => \"offline\", \"approval_prompt\" => '', hd => 'codeminer42.com' }\n  },\n  {\n    \"name\" => \"bitbucket\",\n    \"app_id\" => \"-- bitbucket app id --\",\n    \"app_secret\" => \"-- bitbucket secret id --\",\n    \"url\" => \"https://bitbucket.org/\"\n  }\n]\n\n# if you're importing repos from GitHub, Sidekiq workers can grow as high as 2.5GB of RAM and the default [Sidekiq Killer](https://docs.gitlab.com/operations/sidekiq_memory_killer/) config will cap it down to 1GB, so you want to either disable it by adding '0' or adding a higher limit\ngitlab_rails['env'] = { 'SIDEKIQ_MEMORY_KILLER_MAX_RSS' => '3000000' }\n```\n\nThere are [dozens of default variables][vars] you can [override], just be careful on your testings.\n\nEvery time you change a configuration, you can just run the following commands:\n\n```shell\nsudo gitlab-ctl reconfigure\nsudo gitlab-ctl restart\n```\n\nYou can open a Rails console to inspect production objects like this:\n\n```shell\ngitlab-rails console\n```\n\nI had a lot of trouble importing big repos from GitHub, but after a few days debugging the problem with GitLab Core Team developers [Douglas Alexandre][douglas], [Gabriel Mazetto][gabriel], a few Merge Requests and some local patching and I was finally able to 
import relatively big projects (more than 5,000 commits, more than 1,000 issues, more than 1,200 pull requests with several comments' worth of discussion threads). A project of this size can take a couple of hours to complete, mainly because it's damn slow to go through GitHub's public APIs (they are slow, and they have rate limits and abuse detection, so you can't fetch everything as fast as your bandwidth would allow).

(By the way, don't miss GitLab over at [Rubyconf Brazil 2016][conf], on Sep 23-24.)

Migrating all my GitHub projects took a couple of days, but they all went through smoothly and my team didn't have any trouble; they just adjusted their git remote URLs and were done.

The import procedure from GitHub is quite complete: it brings in not only the git repo per se, but also all the metadata, from labels to comments and pull request history - which is the part that usually takes the most time.

But I'd recommend waiting for at least version 8.11 (it's currently 8.10.3) before trying to import large GitHub projects.

If you're on Bitbucket, unfortunately the importer has fewer features. It will mostly just bring over the source code, so be aware of that if you depend extensively on their pull request system and want to preserve that history. More features will come, and you can even help the team out; they are very resourceful and willing to make GitLab better.

## Side-track: Customizations for every Digital Ocean box

Assume that you should run what's in this section on every new machine you create on Digital Ocean.

First of all, they come without a swap file. No matter how much RAM you have, Linux works better with a swap file.
You can [read more about it][do-ub] later; for now just run the following as root:

```shell
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# note: these sysctl settings do not survive a reboot;
# add them to /etc/sysctl.conf to make them permanent
sysctl vm.swappiness=10
sysctl vm.vfs_cache_pressure=50
```

Edit the `/etc/fstab` file and add this line:

```shell
/swapfile   none    swap    sw    0   0
```

Don't forget to set the [default locale][locale] of your machine. Start by editing the `/etc/environment` file and adding:

```shell
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
```

Then run:

```shell
sudo locale-gen en_US en_US.UTF-8
sudo dpkg-reconfigure locales
```

Finally, you should have Ubuntu automatically install stable security patches for you. You don't want to leave machines online without the most current security fixes, so just run this:

```shell
sudo dpkg-reconfigure --priority=low unattended-upgrades
```

Choose "yes" and you're done. And of course, for every fresh install, it's always good to run the good old:

```shell
sudo apt-get update && sudo apt-get upgrade
```

These are the very basics. I believe it's easier to keep an image with all of this ready, but if you use the standard Digital Ocean images, these settings should do the trick for now.

## Installing the CI Runner

Once you finish your GitLab installation, it's [super easy][inst-gl-run] to deploy the GitLab Runner. You can use the same machine, but I recommend you install it on a separate one.

If you don't know what a runner is, imagine it like this: it's basically a server connected to the GitLab installation. When it's available and online, whenever someone pushes a new commit or merge request to a repository that has a `.gitlab-ci.yml` file present, GitLab pushes a command to the runner.

Depending on how you configured the runner, it will receive this command and spawn a new Docker container. Inside the container it executes whatever you have defined in the `.gitlab-ci.yml` file in the project.
Usually that means fetching cached files (dependencies, for example) and running your test suite.

In the most basic setup you will have only one runner, and any subsequent builds from other users will wait in line until the running ones finish. If you've used external CI services such as Travis-CI or CircleCI, you know that they charge by the number of concurrent builds. And it's **very expensive**.

The fewer concurrent builds available, the longer your users will wait for feedback on their changes, and the less productive you will become. People may even start to avoid adding new tests, or ignore the tests completely, which will really hurt the quality of your project over time. If there is one thing you **must not** do, it's neglect good automated test suites.

Gabriel Mazetto pointed me to a very important GitLab CI Runner feature: [auto-scaling]. This is what they use in their hosted offering over at [GitLab.com].

You can easily set up a runner that uses "docker-machine" and your IaaS provider's APIs to spin up machines on the fly, running as many concurrent builds as you want, and it will be super cheap!

For example, on Digital Ocean you are charged USD 0.06 (6 cents) per hour of usage of a 4GB machine. Over at AWS EC2 you are charged USD 0.041 per hour for an m3.medium machine.

There is extensive documentation, but I will try to summarize what you have to do. For more details I highly recommend you study the [official documentation][doc-runner].

Start by creating 3 new machines on Digital Ocean, all in the same region and with private networking enabled! I will list fake private IP addresses just for the sake of the configuration examples:

- a 1GB machine called "docker-registry-mirror" (e.g. 10.0.0.1)
- a 1GB machine called "ci-cache" (e.g. 10.0.0.2)
- a 1GB machine called "ci-runner" (e.g. 10.0.0.3)

Yeah, they can be small, as very little will run on them.
You can be conservative and choose the 2GB RAM options just to be on the safe side (pricing will still be super cheap).

Don't forget to execute the basic configuration I mentioned above to enable a swap file, automatic security updates and locale regeneration.

SSH into "docker-registry-mirror" and run:

```shell
docker run -d -p 6000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    --restart always \
    --name registry registry:2
```

Now you will have a local Docker image registry proxy and cache at 10.0.0.1:6000 (take note of the real private IP).

SSH into "ci-cache" and run:

```shell
mkdir -p /export/runner

docker run -it --restart always -p 9005:9000 \
        -v /.minio:/root/.minio -v /export:/export \
        --name minio \
        minio/minio:latest /export
```

Now you will have an AWS S3 clone called [Minio] running. I didn't know this project even existed, but it is a nifty little service written in Go that clones AWS S3's behavior and APIs. So now you can have your very own S3 inside your infrastructure!

After the Docker container spins up, it will print out the Access Key and Secret Key; take note of them. This service will be running at `10.0.0.2:9005`.

You can even open a browser, visit the web interface at `http://10.0.0.2:9005` and use the access and secret keys to log in. Make sure you have a bucket named "runner". The files will be stored in the `/export/runner` directory.

![Minio browser](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/minio-browser.png)

Make sure the [bucket name is valid][bucket] (it must be valid DNS naming; for example, **DO NOT use underscores**).

Open this URL on your freshly installed **GitLab CE**: `http://yourgitlab.com/admin/runners` and take note of the **Registration Token**.
Let's say it's `1aaaa_Z1AbB2CdefGhij`.

![admin area](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/admin-area-gitlab.png)

Finally, SSH into "ci-runner" and run:

```shell
curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine

chmod +x /usr/local/bin/docker-machine

curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash

sudo apt-get install gitlab-ci-multi-runner

rm -Rf ~/.docker # just to make sure
```

Now you can register this new runner with your GitLab installation; you will need the Registration Token mentioned above.

```shell
sudo gitlab-ci-multi-runner register
```

You will be asked a few questions, and this is what you can answer:

```shell
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci )
https://yourgitlab.com/ci
Please enter the gitlab-ci token for this runner
1aaaa_Z1AbB2CdefGhij # as in the example above
Please enter the gitlab-ci description for this runner
my-autoscale-runner
INFO[0034] fcf5c619 Registering runner... succeeded
Please enter the executor: shell, docker, docker-ssh, docker+machine, docker-ssh+machine, ssh?
docker+machine
Please enter the Docker image (eg. ruby:2.1):
codeminer42/ci-ruby:2.3
INFO[0037] Runner registered successfully.
Feel free to start it, but if it's
running already the config should be automatically reloaded!
```

Let's make a copy of the original configuration, just to be safe:

```shell
cp /etc/gitlab-runner/config.toml /etc/gitlab-runner/config.bak
```

Copy the first few lines of this file (you want the token); it will look like this:

```toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "my-autoscale-runner"
  url = "http://yourgitlab.com/ci"
  token = "--- generated runner token ---"
  executor = "docker+machine"
```

The important part here is the "token"; take note of it. You will also want to create a [new API token over at Digital Ocean][do-tok]: just generate a new token and take note of it.

You can now replace the entire `config.toml` file with this:

```toml
concurrent = 20
check_interval = 0

[[runners]]
  name = "my-autoscale-runner"
  url = "http://yourgitlab.com/ci"
  token = "--- generated runner token ---"
  executor = "docker+machine"
  limit = 15
  [runners.docker]
    tls_verify = false
    image = "codeminer42/ci-ruby:2.3"
    privileged = false
  [runners.machine]
    IdleCount = 2                      # keep 2 machines in idle state
    IdleTime = 1800                    # a machine can stay idle for up to 30 minutes (after that it is removed)
    MaxBuilds = 100                    # a machine can handle up to 100 builds in a row (after that it is removed)
    MachineName = "ci-auto-scale-%s"   # each machine will have a unique name ('%s' is required)
    MachineDriver = "digitalocean"     # Docker Machine uses the 'digitalocean' driver
    MachineOptions = [
        "digitalocean-image=coreos-beta",
        "digitalocean-ssh-user=core",
        "digitalocean-access-token=-- your new Digital Ocean API token --",
        "digitalocean-region=nyc1",
        "digitalocean-size=4gb",
\"digitalocean-private-networking\",\n        \"engine-registry-mirror=http://10.0.0.1:6000\"\n    ]\n  [runners.cache]\n    Type = \"s3\"   # The Runner is using a distributed cache with Amazon S3 service\n    ServerAddress = \"10.0.0.2:9005\"  # minio\n    AccessKey = \"-- your minio access key --\"\n    SecretKey = \"-- your minio secret key\"\n    BucketName = \"runner\"\n    Insecure = true # Use Insecure only when using with Minio, without the TLS certificate enabled\n\n```\n\nAnd you can restart the runner to pick up the new configuration like this:\n\n```shell\ngitlab-ci-multi-runner restart\n```\n\nAs I said before, you will want to read the extensive [official documentation][auto-sc-doc] (and every link within).\n\nIf you did everything right, changing the correct private IPs for the docker registry and cache, the correct tokens, and so forth, you can log in to your Digital Ocean dashboard and you will see something like this:\n\n![DO droplets](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/digital-ocean-droplets.png)\n\nAnd from the `ci-runner` machine, you can list them like this:\n\n```shell\n# docker-machine ls\n\nNAME                                    ACTIVE        DRIVER   STATE URL            SWARM   DOCKER    ERRORS\nrunner-xxxx-ci-auto-scale-xxxx-xxxx  -  digitalocean  Running tcp://191.168.0.1:237    v1.10.3\nrunner-xxxx-ci-auto-scale-xxxx-xxxx  -  digitalocean  Running tcp://192.169.0.2:2376   v1.10.3\n```\n\nThey should not list any errors, meaning that they are up and running, waiting for new builds to start.\n\nThere will be 2 new machines listed in your Digital Ocean dashboard, named \"runner-xxxxx-ci-auto-scale-xxxxx\". This is what `IdleCount = 2` does. If they stay idle for more than 30 minutes (`IdleTime = 1800`) they will be shut down so you don't get charged.\n\nYou can have several \"runner\" definitions, each with a `limit` of builds/machines that can be spawned in Digital Ocean. 
You can have other runner definitions for other providers, for example. But in this example we are limited to at most 15 machines, so 15 concurrent builds.

The `concurrent` limit is a global setting. So if I had 3 runner definitions, each with a `limit` of 15, they would still be capped globally at 20, as defined by the `concurrent` variable.

You can use different providers for specific needs, for example to run macOS builds, Raspberry Pi builds or other exotic kinds of builds. In this example I am keeping it simple and just running many builds on the same provider (Digital Ocean).

And don't worry about the monthly fee for each machine. When used in this manner, you pay per hour.

Also, make sure you spin up all your machines (docker-registry, minio cache, CI runner) with private networking enabled (so they talk through the internal VLAN instead of going all the way through the public internet) and that they are all in the same region's data center (NYC1 is New York 1 - New York has 3 sub-regions, for example). Don't start machines in different regions.

Because we have the Docker proxy/cache and the Minio/S3 cache, your builds will take longer the first time (let's say 5 minutes), and subsequent builds will fetch everything from the cache (taking, let's say, a minute and a half). It's fast and it's convenient.

## Setting up each Project for the Runner

The Runner is one of the newest pieces of the GitLab ecosystem, so you might have some trouble at first figuring out a decent configuration. But once you have the whole infrastructure figured out, as described in the previous section, it's as easy as adding a `.gitlab-ci.yml` file to your project's root directory.
Something like this:

```yaml
# This file is a template, and might need editing before it works on your project.
image: codeminer42/ci-ruby:2.3

# Pick zero or more services to be used on all builds.
# Only needed when using a docker container to run your tests in.
# Check out: https://docs.gitlab.com/ci/docker/using_docker_images/#what-is-service
services:
  - postgres:latest
  - redis:latest

cache:
  key: your-project-name
  untracked: true
  paths:
    - .ci_cache/

variables:
  RAILS_ENV: 'test'
  DATABASE_URL: postgresql://postgres:@postgres
  CODECLIMATE_REPO_TOKEN: -- your codeclimate project token --

before_script:
  - bundle install --without development production -j $(nproc) --path .ci_cache
  - cp .env.sample .env
  - cp config/database.yml.example config/database.yml
  - bundle exec rake db:create db:migrate

test:
  script:
    - xvfb-run bundle exec rspec
```

My team at [Codeminer 42][codeminer] prepared a simple [Docker image] with useful stuff pre-installed (such as the newest PhantomJS, xvfb, etc.), so it's now super easy to enable automated builds within GitLab just by adding this file to the repositories. (Thanks to Carlos Lopes, Danilo Resende and Paulo Diovanni - who will be talking about [Docker at Rubyconf Brasil 2016][docker-conf], by the way.)

GitLab CI even supports building a pending merge request, and you can enforce that a request can only be merged if its builds pass, just like with GitHub + Travis. And as Code Climate is agnostic to the repository host and CI runner, you can easily integrate it as well.

![merge requests](https://about.gitlab.com/images/blogimages/moving-to-gitlab-yes-its-worth-it/settings-codeminer42-cm-fulcrum-gitlab.png)

## Conclusion

The math is hard to argue against: the GitLab + GitLab CI + Digital Ocean combo is a big win.
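If you want to redo that math for your own team, the comparison boils down to a few multiplications. Here is a quick sketch using the figures quoted earlier in this article (swap in your own plan prices and team size):

```shell
# yearly cost comparison, using the 100-developer figures from this article
github_new=10560             # GitHub per-user plan, 100 devs: (25 + 95*9) * 12
travis=$((489 * 12))         # Travis-CI, 10 concurrent jobs
gitlab=$((40 * 12))          # one 4GB Digital Ocean box running GitLab CE
runners=840                  # runner/cache boxes plus autoscaled build hours (estimate)

old=$((github_new + travis))
new=$((gitlab + runners))
echo "GitHub + Travis: USD $old/year"    # 16428
echo "GitLab + DO:     USD $new/year"    # 1320
echo "Savings:         USD $((old - new))/year"
```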
GitLab's interface is very familiar, so users coming from GitHub or Bitbucket will feel at home in no time.

We can use all the [Git flows] we're used to.

**GitLab CE** is still a work in progress, though. The team is picking up the pace, but there are currently more than [4,200 open issues][gl-issues]. As it's all Ruby on Rails and Ruby tooling, you can easily jump in and contribute. No contribution is too small; just reporting how to reproduce a bug is enough to help the developers improve faster.

But don't shy away because of the open issues: it's fully functional right now, and I have not found any bugs that could be considered showstoppers.

They got many things right. First of all, it's a "simple" Ruby on Rails project, with a no-frills front end in plain jQuery. The choice of Haml for the views is questionable, but it doesn't hurt. They use good old Sidekiq + Redis for asynchronous jobs. No black magic here: a pure monolith that's not difficult to understand and contribute to.

The APIs are all written using Grape. They keep the [GitLab CE][ce] project separated from other components, such as [GitLab Shell][shell] and the [GitLab CI Multi-Runner][run].

They also forked [Omnibus][omn] in order to package the CE Rails project as a ".deb". Everything is orchestrated with Docker. And when a new version is available, you only need to run `apt-get update && apt-get upgrade` and it will do all the work of backing up and migrating PostgreSQL, updating the code, bundling in new dependencies, restarting the services and so forth. It's super convenient, and you should take a look at this project if you have complicated Rails deployments on your own infrastructure (outside of Heroku, for example).

I am almost done moving hundreds of repositories from both Bitbucket and GitHub to GitLab, and the developers at my company are already using it on a daily basis without any problems.
We are almost at the point where we can disengage from Bitbucket, GitHub and external CIs.

You will be surprised how **easily your company can do it** too, **saving a couple thousand dollars** in the process while **having fun doing it**!

----

_**Note:** this article was originally posted by [AkitaOnRails]._

<!-- identifiers -->

[AkitaOnRails]: http://www.akitaonrails.com/2016/08/03/moving-to-gitlab-yes-it-s-worth-it
[auto-sc-doc]: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md
[auto-scaling]: https://docs.gitlab.com/releases/
[bucket]: http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
[ce]: https://gitlab.com/gitlab-org/gitlab-ce
[codeminer]: http://www.codeminer42.com/
[comm-graph]: https://github.com/gitlabhq/gitlabhq/graphs/contributors?from=2015-03-14&to=2016-08-02&type=c
[conf]: http://www.rubyconf.com.br/pt-BR/speakers#Gabriel%20Gon%C3%A7alves%20Nunes%20Mazetto
[do-inst]: https://www.digitalocean.com/features/one-click-apps/gitlab/
[do-tok]: https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2
[do-ub]: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
[doc-runner]: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/install/autoscaling.md#prepare-the-docker-registry-and-cache-server
[Docker image]: https://hub.docker.com/r/codeminer42/ci-ruby/
[docker-conf]: http://www.rubyconf.com.br/pt-BR/speakers#Paulo%20Diovani%20Gon%C3%A7alves
[documentation]: /install/
[douglas]: https://gitlab.com/dbalexandre
[evang]: http://www.akitaonrails.com/2007/9/22/jogar-pedra-em-gato-morto-por-que-subversion-no-presta
[gabriel]: https://gitlab.com/brodock
[gh-fund]: https://www.crunchbase.com/organization/github#/entity
[gh-prices]: https://github.com/blog/2164-introducing-unlimited-private-repositories
[Git flows]: /2014/09/29/gitlab-flow/
[GitLab.com]: https://gitlab.com/users/sign_in
[gitorious]:
https://gitorious.org/gitorious/oboxodo-gitorious?p=gitorious:oboxodo-gitorious.git;a=search;h=9f6bdf5887c65a440bc3fdc43a14652f42ddf103;s=Fabio+Akita;st=committer\n[gl-fund]: https://www.crunchbase.com/organization/gitlab-com#/entity\n[gl-issues]: https://gitlab.com/gitlab-org/gitlab-ce/issues\n[gl]: /\n[inst-gl-run]: /blog/how-to-set-up-gitlab-runner-on-digitalocean/\n[locale]: http://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue\n[Minio]: https://github.com/minio/minio\n[omn]: https://gitlab.com/gitlab-org/omnibus-gitlab\n[omni-auth]: https://docs.gitlab.com/integration/omniauth/\n[override]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/environment-variables.md\n[run]: https://gitlab.com/gitlab-org/gitlab-runner\n[runner]: https://gitlab.com/gitlab-org/gitlab-runner\n[shell]: https://gitlab.com/gitlab-org/gitlab-shell\n[vars]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/attributes/default.rb#L57\n","yml",{},true,"/en-us/blog/moving-to-gitlab-yes-its-worth-it",{"title":15,"description":16,"ogTitle":15,"ogDescription":16,"noIndex":12,"ogImage":19,"ogUrl":27,"ogSiteName":28,"ogType":29,"canonicalUrls":27},"https://about.gitlab.com/blog/moving-to-gitlab-yes-its-worth-it","https://about.gitlab.com","article","en-us/blog/moving-to-gitlab-yes-its-worth-it",[],"E1oqPiLINk5W-N8ROY3bC7MluG8PAk8TTw1rT5JxEo0",{"data":34},{"logo":35,"freeTrial":40,"sales":45,"login":50,"items":55,"search":363,"minimal":394,"duo":413,"switchNav":422,"pricingDeployment":433},{"config":36},{"href":37,"dataGaName":38,"dataGaLocation":39},"/","gitlab logo","header",{"text":41,"config":42},"Get free trial",{"href":43,"dataGaName":44,"dataGaLocation":39},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":46,"config":47},"Talk to sales",{"href":48,"dataGaName":49,"dataGaLocation":39},"/sales/","sales",{"text":51,"config":52},"Sign 
in",{"href":53,"dataGaName":54,"dataGaLocation":39},"https://gitlab.com/users/sign_in/","sign in",[56,83,178,183,284,344],{"text":57,"config":58,"cards":60},"Platform",{"dataNavLevelOne":59},"platform",[61,67,75],{"title":57,"description":62,"link":63},"The intelligent orchestration platform for DevSecOps",{"text":64,"config":65},"Explore our Platform",{"href":66,"dataGaName":59,"dataGaLocation":39},"/platform/",{"title":68,"description":69,"link":70},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":71,"config":72},"Meet GitLab Duo",{"href":73,"dataGaName":74,"dataGaLocation":39},"/gitlab-duo-agent-platform/","gitlab duo agent platform",{"title":76,"description":77,"link":78},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":79,"config":80},"Learn more",{"href":81,"dataGaName":82,"dataGaLocation":39},"/why-gitlab/","why gitlab",{"text":84,"left":24,"config":85,"link":87,"lists":91,"footer":160},"Product",{"dataNavLevelOne":86},"solutions",{"text":88,"config":89},"View all Solutions",{"href":90,"dataGaName":86,"dataGaLocation":39},"/solutions/",[92,116,139],{"title":93,"description":94,"link":95,"items":100},"Automation","CI/CD and automation to accelerate deployment",{"config":96},{"icon":97,"href":98,"dataGaName":99,"dataGaLocation":39},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[101,105,108,112],{"text":102,"config":103},"CI/CD",{"href":104,"dataGaLocation":39,"dataGaName":102},"/solutions/continuous-integration/",{"text":68,"config":106},{"href":73,"dataGaLocation":39,"dataGaName":107},"gitlab duo agent platform - product menu",{"text":109,"config":110},"Source Code Management",{"href":111,"dataGaLocation":39,"dataGaName":109},"/solutions/source-code-management/",{"text":113,"config":114},"Automated Software Delivery",{"href":98,"dataGaLocation":39,"dataGaName":115},"Automated software delivery",{"title":117,"description":118,"link":119,"items":124},"Security","Deliver 
code faster without compromising security",{"config":120},{"href":121,"dataGaName":122,"dataGaLocation":39,"icon":123},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[125,129,134],{"text":126,"config":127},"Application Security Testing",{"href":121,"dataGaName":128,"dataGaLocation":39},"Application security testing",{"text":130,"config":131},"Software Supply Chain Security",{"href":132,"dataGaLocation":39,"dataGaName":133},"/solutions/supply-chain/","Software supply chain security",{"text":135,"config":136},"Software Compliance",{"href":137,"dataGaName":138,"dataGaLocation":39},"/solutions/software-compliance/","software compliance",{"title":140,"link":141,"items":146},"Measurement",{"config":142},{"icon":143,"href":144,"dataGaName":145,"dataGaLocation":39},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[147,151,155],{"text":148,"config":149},"Visibility & Measurement",{"href":144,"dataGaLocation":39,"dataGaName":150},"Visibility and Measurement",{"text":152,"config":153},"Value Stream Management",{"href":154,"dataGaLocation":39,"dataGaName":152},"/solutions/value-stream-management/",{"text":156,"config":157},"Analytics & Insights",{"href":158,"dataGaLocation":39,"dataGaName":159},"/solutions/analytics-and-insights/","Analytics and insights",{"title":161,"items":162},"GitLab for",[163,168,173],{"text":164,"config":165},"Enterprise",{"href":166,"dataGaLocation":39,"dataGaName":167},"/enterprise/","enterprise",{"text":169,"config":170},"Small Business",{"href":171,"dataGaLocation":39,"dataGaName":172},"/small-business/","small business",{"text":174,"config":175},"Public Sector",{"href":176,"dataGaLocation":39,"dataGaName":177},"/solutions/public-sector/","public 
sector",{"text":179,"config":180},"Pricing",{"href":181,"dataGaName":182,"dataGaLocation":39,"dataNavLevelOne":182},"/pricing/","pricing",{"text":184,"config":185,"link":187,"lists":191,"feature":271},"Resources",{"dataNavLevelOne":186},"resources",{"text":188,"config":189},"View all resources",{"href":190,"dataGaName":186,"dataGaLocation":39},"/resources/",[192,225,243],{"title":193,"items":194},"Getting started",[195,200,205,210,215,220],{"text":196,"config":197},"Install",{"href":198,"dataGaName":199,"dataGaLocation":39},"/install/","install",{"text":201,"config":202},"Quick start guides",{"href":203,"dataGaName":204,"dataGaLocation":39},"/get-started/","quick setup checklists",{"text":206,"config":207},"Learn",{"href":208,"dataGaLocation":39,"dataGaName":209},"https://university.gitlab.com/","learn",{"text":211,"config":212},"Product documentation",{"href":213,"dataGaName":214,"dataGaLocation":39},"https://docs.gitlab.com/","product documentation",{"text":216,"config":217},"Best practice videos",{"href":218,"dataGaName":219,"dataGaLocation":39},"/getting-started-videos/","best practice videos",{"text":221,"config":222},"Integrations",{"href":223,"dataGaName":224,"dataGaLocation":39},"/integrations/","integrations",{"title":226,"items":227},"Discover",[228,233,238],{"text":229,"config":230},"Customer success stories",{"href":231,"dataGaName":232,"dataGaLocation":39},"/customers/","customer success stories",{"text":234,"config":235},"Blog",{"href":236,"dataGaName":237,"dataGaLocation":39},"/blog/","blog",{"text":239,"config":240},"Remote",{"href":241,"dataGaName":242,"dataGaLocation":39},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":244,"items":245},"Connect",[246,251,256,261,266],{"text":247,"config":248},"GitLab 
Services",{"href":249,"dataGaName":250,"dataGaLocation":39},"/services/","services",{"text":252,"config":253},"Community",{"href":254,"dataGaName":255,"dataGaLocation":39},"/community/","community",{"text":257,"config":258},"Forum",{"href":259,"dataGaName":260,"dataGaLocation":39},"https://forum.gitlab.com/","forum",{"text":262,"config":263},"Events",{"href":264,"dataGaName":265,"dataGaLocation":39},"/events/","events",{"text":267,"config":268},"Partners",{"href":269,"dataGaName":270,"dataGaLocation":39},"/partners/","partners",{"backgroundColor":272,"textColor":273,"text":274,"image":275,"link":279},"#2f2a6b","#fff","Insights for the future of software development",{"altText":276,"config":277},"the source promo card",{"src":278},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":280,"config":281},"Read the latest",{"href":282,"dataGaName":283,"dataGaLocation":39},"/the-source/","the source",{"text":285,"config":286,"lists":288},"Company",{"dataNavLevelOne":287},"company",[289],{"items":290},[291,296,302,304,309,314,319,324,329,334,339],{"text":292,"config":293},"About",{"href":294,"dataGaName":295,"dataGaLocation":39},"/company/","about",{"text":297,"config":298,"footerGa":301},"Jobs",{"href":299,"dataGaName":300,"dataGaLocation":39},"/jobs/","jobs",{"dataGaName":300},{"text":262,"config":303},{"href":264,"dataGaName":265,"dataGaLocation":39},{"text":305,"config":306},"Leadership",{"href":307,"dataGaName":308,"dataGaLocation":39},"/company/team/e-group/","leadership",{"text":310,"config":311},"Team",{"href":312,"dataGaName":313,"dataGaLocation":39},"/company/team/","team",{"text":315,"config":316},"Handbook",{"href":317,"dataGaName":318,"dataGaLocation":39},"https://handbook.gitlab.com/","handbook",{"text":320,"config":321},"Investor relations",{"href":322,"dataGaName":323,"dataGaLocation":39},"https://ir.gitlab.com/","investor relations",{"text":325,"config":326},"Trust 
Center",{"href":327,"dataGaName":328,"dataGaLocation":39},"/security/","trust center",{"text":330,"config":331},"AI Transparency Center",{"href":332,"dataGaName":333,"dataGaLocation":39},"/ai-transparency-center/","ai transparency center",{"text":335,"config":336},"Newsletter",{"href":337,"dataGaName":338,"dataGaLocation":39},"/company/contact/#contact-forms","newsletter",{"text":340,"config":341},"Press",{"href":342,"dataGaName":343,"dataGaLocation":39},"/press/","press",{"text":345,"config":346,"lists":347},"Contact us",{"dataNavLevelOne":287},[348],{"items":349},[350,353,358],{"text":46,"config":351},{"href":48,"dataGaName":352,"dataGaLocation":39},"talk to sales",{"text":354,"config":355},"Support portal",{"href":356,"dataGaName":357,"dataGaLocation":39},"https://support.gitlab.com","support portal",{"text":359,"config":360},"Customer portal",{"href":361,"dataGaName":362,"dataGaLocation":39},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":364,"login":365,"suggestions":372},"Close",{"text":366,"link":367},"To search repositories and projects, login to",{"text":368,"config":369},"gitlab.com",{"href":53,"dataGaName":370,"dataGaLocation":371},"search login","search",{"text":373,"default":374},"Suggestions",[375,377,381,383,387,391],{"text":68,"config":376},{"href":73,"dataGaName":68,"dataGaLocation":371},{"text":378,"config":379},"Code Suggestions (AI)",{"href":380,"dataGaName":378,"dataGaLocation":371},"/solutions/code-suggestions/",{"text":102,"config":382},{"href":104,"dataGaName":102,"dataGaLocation":371},{"text":384,"config":385},"GitLab on AWS",{"href":386,"dataGaName":384,"dataGaLocation":371},"/partners/technology-partners/aws/",{"text":388,"config":389},"GitLab on Google Cloud",{"href":390,"dataGaName":388,"dataGaLocation":371},"/partners/technology-partners/google-cloud-platform/",{"text":392,"config":393},"Why 
GitLab?",{"href":81,"dataGaName":392,"dataGaLocation":371},{"freeTrial":395,"mobileIcon":400,"desktopIcon":405,"secondaryButton":408},{"text":396,"config":397},"Start free trial",{"href":398,"dataGaName":44,"dataGaLocation":399},"https://gitlab.com/-/trials/new/","nav",{"altText":401,"config":402},"Gitlab Icon",{"src":403,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":401,"config":406},{"src":407,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":409,"config":410},"Get Started",{"href":411,"dataGaName":412,"dataGaLocation":399},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/get-started/","get started",{"freeTrial":414,"mobileIcon":418,"desktopIcon":420},{"text":415,"config":416},"Learn more about GitLab Duo",{"href":73,"dataGaName":417,"dataGaLocation":399},"gitlab duo",{"altText":401,"config":419},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":421},{"src":407,"dataGaName":404,"dataGaLocation":399},{"button":423,"mobileIcon":428,"desktopIcon":430},{"text":424,"config":425},"/switch",{"href":426,"dataGaName":427,"dataGaLocation":399},"#contact","switch",{"altText":401,"config":429},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":431},{"src":432,"dataGaName":404,"dataGaLocation":399},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1773335277/ohhpiuoxoldryzrnhfrh.png",{"freeTrial":434,"mobileIcon":439,"desktopIcon":441},{"text":435,"config":436},"Back to pricing",{"href":181,"dataGaName":437,"dataGaLocation":399,"icon":438},"back to pricing","GoBack",{"altText":401,"config":440},{"src":403,"dataGaName":404,"dataGaLocation":399},{"altText":401,"config":442},{"src":407,"dataGaName":404,"dataGaLocation":399},{"title":444,"button":445,"config":450},"See how agentic AI transforms 
software delivery",{"text":446,"config":447},"Watch GitLab Transcend now",{"href":448,"dataGaName":449,"dataGaLocation":39},"/events/transcend/virtual/","transcend event",{"layout":451,"icon":452,"disabled":24},"release","AiStar",{"data":454},{"text":455,"source":456,"edit":462,"contribute":467,"config":472,"items":477,"minimal":684},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":457,"config":458},"View page source",{"href":459,"dataGaName":460,"dataGaLocation":461},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":463,"config":464},"Edit this page",{"href":465,"dataGaName":466,"dataGaLocation":461},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":468,"config":469},"Please contribute",{"href":470,"dataGaName":471,"dataGaLocation":461},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":473,"facebook":474,"youtube":475,"linkedin":476},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[478,525,579,623,650],{"title":179,"links":479,"subMenu":494},[480,484,489],{"text":481,"config":482},"View plans",{"href":181,"dataGaName":483,"dataGaLocation":461},"view plans",{"text":485,"config":486},"Why Premium?",{"href":487,"dataGaName":488,"dataGaLocation":461},"/pricing/premium/","why premium",{"text":490,"config":491},"Why Ultimate?",{"href":492,"dataGaName":493,"dataGaLocation":461},"/pricing/ultimate/","why ultimate",[495],{"title":496,"links":497},"Contact Us",[498,501,503,505,510,515,520],{"text":499,"config":500},"Contact 
sales",{"href":48,"dataGaName":49,"dataGaLocation":461},{"text":354,"config":502},{"href":356,"dataGaName":357,"dataGaLocation":461},{"text":359,"config":504},{"href":361,"dataGaName":362,"dataGaLocation":461},{"text":506,"config":507},"Status",{"href":508,"dataGaName":509,"dataGaLocation":461},"https://status.gitlab.com/","status",{"text":511,"config":512},"Terms of use",{"href":513,"dataGaName":514,"dataGaLocation":461},"/terms/","terms of use",{"text":516,"config":517},"Privacy statement",{"href":518,"dataGaName":519,"dataGaLocation":461},"/privacy/","privacy statement",{"text":521,"config":522},"Cookie preferences",{"dataGaName":523,"dataGaLocation":461,"id":524,"isOneTrustButton":24},"cookie preferences","ot-sdk-btn",{"title":84,"links":526,"subMenu":535},[527,531],{"text":528,"config":529},"DevSecOps platform",{"href":66,"dataGaName":530,"dataGaLocation":461},"devsecops platform",{"text":532,"config":533},"AI-Assisted Development",{"href":73,"dataGaName":534,"dataGaLocation":461},"ai-assisted development",[536],{"title":537,"links":538},"Topics",[539,544,549,554,559,564,569,574],{"text":540,"config":541},"CICD",{"href":542,"dataGaName":543,"dataGaLocation":461},"/topics/ci-cd/","cicd",{"text":545,"config":546},"GitOps",{"href":547,"dataGaName":548,"dataGaLocation":461},"/topics/gitops/","gitops",{"text":550,"config":551},"DevOps",{"href":552,"dataGaName":553,"dataGaLocation":461},"/topics/devops/","devops",{"text":555,"config":556},"Version Control",{"href":557,"dataGaName":558,"dataGaLocation":461},"/topics/version-control/","version control",{"text":560,"config":561},"DevSecOps",{"href":562,"dataGaName":563,"dataGaLocation":461},"/topics/devsecops/","devsecops",{"text":565,"config":566},"Cloud Native",{"href":567,"dataGaName":568,"dataGaLocation":461},"/topics/cloud-native/","cloud native",{"text":570,"config":571},"AI for Coding",{"href":572,"dataGaName":573,"dataGaLocation":461},"/topics/devops/ai-for-coding/","ai for 
coding",{"text":575,"config":576},"Agentic AI",{"href":577,"dataGaName":578,"dataGaLocation":461},"/topics/agentic-ai/","agentic ai",{"title":580,"links":581},"Solutions",[582,584,586,591,595,598,602,605,607,610,613,618],{"text":126,"config":583},{"href":121,"dataGaName":126,"dataGaLocation":461},{"text":115,"config":585},{"href":98,"dataGaName":99,"dataGaLocation":461},{"text":587,"config":588},"Agile development",{"href":589,"dataGaName":590,"dataGaLocation":461},"/solutions/agile-delivery/","agile delivery",{"text":592,"config":593},"SCM",{"href":111,"dataGaName":594,"dataGaLocation":461},"source code management",{"text":540,"config":596},{"href":104,"dataGaName":597,"dataGaLocation":461},"continuous integration & delivery",{"text":599,"config":600},"Value stream management",{"href":154,"dataGaName":601,"dataGaLocation":461},"value stream management",{"text":545,"config":603},{"href":604,"dataGaName":548,"dataGaLocation":461},"/solutions/gitops/",{"text":164,"config":606},{"href":166,"dataGaName":167,"dataGaLocation":461},{"text":608,"config":609},"Small business",{"href":171,"dataGaName":172,"dataGaLocation":461},{"text":611,"config":612},"Public sector",{"href":176,"dataGaName":177,"dataGaLocation":461},{"text":614,"config":615},"Education",{"href":616,"dataGaName":617,"dataGaLocation":461},"/solutions/education/","education",{"text":619,"config":620},"Financial services",{"href":621,"dataGaName":622,"dataGaLocation":461},"/solutions/finance/","financial 
services",{"title":184,"links":624},[625,627,629,631,634,636,638,640,642,644,646,648],{"text":196,"config":626},{"href":198,"dataGaName":199,"dataGaLocation":461},{"text":201,"config":628},{"href":203,"dataGaName":204,"dataGaLocation":461},{"text":206,"config":630},{"href":208,"dataGaName":209,"dataGaLocation":461},{"text":211,"config":632},{"href":213,"dataGaName":633,"dataGaLocation":461},"docs",{"text":234,"config":635},{"href":236,"dataGaName":237,"dataGaLocation":461},{"text":229,"config":637},{"href":231,"dataGaName":232,"dataGaLocation":461},{"text":239,"config":639},{"href":241,"dataGaName":242,"dataGaLocation":461},{"text":247,"config":641},{"href":249,"dataGaName":250,"dataGaLocation":461},{"text":252,"config":643},{"href":254,"dataGaName":255,"dataGaLocation":461},{"text":257,"config":645},{"href":259,"dataGaName":260,"dataGaLocation":461},{"text":262,"config":647},{"href":264,"dataGaName":265,"dataGaLocation":461},{"text":267,"config":649},{"href":269,"dataGaName":270,"dataGaLocation":461},{"title":285,"links":651},[652,654,656,658,660,662,664,668,673,675,677,679],{"text":292,"config":653},{"href":294,"dataGaName":287,"dataGaLocation":461},{"text":297,"config":655},{"href":299,"dataGaName":300,"dataGaLocation":461},{"text":305,"config":657},{"href":307,"dataGaName":308,"dataGaLocation":461},{"text":310,"config":659},{"href":312,"dataGaName":313,"dataGaLocation":461},{"text":315,"config":661},{"href":317,"dataGaName":318,"dataGaLocation":461},{"text":320,"config":663},{"href":322,"dataGaName":323,"dataGaLocation":461},{"text":665,"config":666},"Sustainability",{"href":667,"dataGaName":665,"dataGaLocation":461},"/sustainability/",{"text":669,"config":670},"Diversity, inclusion and belonging (DIB)",{"href":671,"dataGaName":672,"dataGaLocation":461},"/diversity-inclusion-belonging/","Diversity, inclusion and 
belonging",{"text":325,"config":674},{"href":327,"dataGaName":328,"dataGaLocation":461},{"text":335,"config":676},{"href":337,"dataGaName":338,"dataGaLocation":461},{"text":340,"config":678},{"href":342,"dataGaName":343,"dataGaLocation":461},{"text":680,"config":681},"Modern Slavery Transparency Statement",{"href":682,"dataGaName":683,"dataGaLocation":461},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":685},[686,689,692],{"text":687,"config":688},"Terms",{"href":513,"dataGaName":514,"dataGaLocation":461},{"text":690,"config":691},"Cookies",{"dataGaName":523,"dataGaLocation":461,"id":524,"isOneTrustButton":24},{"text":693,"config":694},"Privacy",{"href":518,"dataGaName":519,"dataGaLocation":461},[696],{"id":697,"title":18,"body":8,"config":698,"content":700,"description":8,"extension":22,"meta":704,"navigation":24,"path":705,"seo":706,"stem":707,"__hash__":708},"blogAuthors/en-us/blog/authors/fabio-akita.yml",{"template":699},"BlogAuthor",{"name":18,"config":701},{"headshot":702,"ctfId":703},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749659488/Blog/Author%20Headshots/gitlab-logo-extra-whitespace.png","Fabio-Akita",{},"/en-us/blog/authors/fabio-akita",{},"en-us/blog/authors/fabio-akita","DVe2XlXEZZDWye9uAgRZlPzRQwj8bowUAcyuUMr7_i8",[710,723,737],{"content":711,"config":721},{"title":712,"description":713,"authors":714,"heroImage":716,"date":717,"body":718,"category":9,"tags":719},"GitLab AI Hackathon 2026: Meet the winners","Nearly 7,000 developers built 600+ AI agents and flows on GitLab Duo Agent Platform. Find out who won and what they created.",[715],"Nick Veenhof","https://res.cloudinary.com/about-gitlab-com/image/upload/v1776457632/llddiylsgwuze0u1rjks.png","2026-04-22","AI writes code. That is expected now. But planning, security, compliance, and deployments? Those gaps remain. I have run contributor programs for years. 
I have never seen a community respond to technology like this.\n\nThat is why we opened [GitLab Duo Agent Platform](https://about.gitlab.com/gitlab-duo-agent-platform/) and invited developers worldwide to build AI agents that help teams ship secure software faster. Not chatbots that answer questions, but agents that jump into workflows, respond to events, and act on your behalf. The GitLab AI Hackathon ran from February 9 to March 25, 2026, on Devpost, the hackathon platform. Google Cloud and Anthropic joined as co-sponsors.\n\nWhen my team planned this hackathon with Google Cloud and Anthropic, I asked the judges to score four things: technical work, design, potential impact, and idea quality. We hoped for strong turnout. What we got surprised all of us. Nineteen judges spent 18 days reviewing every entry. Google Cloud and Anthropic provided judges, prizes, and cloud access. The community built hundreds of agents and flows because they wanted to solve these problems.\n\nNearly 7,000 developers showed up. They built 600+ agents and flows in weeks. The prizes across all categories totaled $65,000 from GitLab, Google Cloud, and Anthropic.\n\n\nIf you have ever watched a senior engineer leave and take half the team's knowledge with them, you know why the winning project hit so hard.\n\nRead on to find out what the community built.\n\n## Grand Prize: LORE\n\n[LORE](https://devpost.com/software/lore-living-organizational-record-engine), the Living Organizational Record Engine, uses eight agents with a router that sends each question to the right agent, logic to prevent circular loops in the knowledge graph, a visual dashboard, and carbon tracking. The command-line tool ships with 43 tests (yes, 43 tests in a hackathon project).\n\nLORE solves a real problem: the knowledge that lives in engineers' heads and walks out the door when they leave. In my experience, a hackathon project with 43 tests is rare. 
That many tests in a hackathon project tells you something about the team behind it.\n\nJudge April Guo (Anthropic) wrote: \"This feels like a product, not a hackathon project.\"\n\n### Google Cloud winners\n\n[Gitdefender](https://devpost.com/software/gitdefender) won the Google Cloud Grand Prize. It works inside code review workflows, finding and fixing security issues. It spots the bug, writes the fix, and opens the code review. No developer needs to step in.\n\n[Aegis](https://devpost.com/software/aegis-2m1oq0) won the Google Cloud Runner-Up prize. It gives AI-powered explanations for every decision it makes, deployed to Google Cloud and ready for production use.\n\n### Anthropic winners\n\n[GraphDev](https://devpost.com/software/graphdev) won the Anthropic Grand Prize. It maps code links and shows how systems change over time. Judge Aboobacker MK (GitLab) noted it was \"in sync with our work on GitLab knowledge graph.\" Judge Ayush Billore (GitLab) wrote: \"Loved the demo and UX, super useful for understanding how the system evolved and what gets impacted by changes.\" You can see the full impact of a change before you make it.\n\n[DocSync](https://devpost.com/software/pipeheal) won the Anthropic Runner-Up prize. It uses three agents: Detector, Writer, and Reviewer. If DocSync is confident in the fix, it opens a code review. If not, it creates an issue for a human to check.\n\n## Category winners\n\n### Most Technically Impressive\n\nDatabase migrations break things. [Time-Traveler](https://devpost.com/software/time-traveler-w3cxp0) creates a safe copy of your production setup, runs the migration against that copy, and reports the result. It runs five agents connected by a bridge, with real Google Cloud deployment, real PostgreSQL migrations, and real data.\n\n### Most Impactful\n\n[RedAgent](https://devpost.com/software/redagent) checks AI-generated security reports, closing the trust gap between AI findings and developer action. 
If your team uses AI for security scanning, you know this problem. I have seen teams dismiss AI findings because they could not verify them. RedAgent gives teams a way to check AI output before it reaches developers.\n\n### Easiest to Use\n\n[Launch Control](https://devpost.com/software/launch-control-bgp8az) delivers polished UX and solid infrastructure, and scored well on sustainability too.\n\n## The sustainability signal\n\nFive projects won prizes or bonuses for environmental impact. Software delivery already has a carbon cost from CI/CD pipelines, and now LLMs run compute at scale on top of that. We created the Green Agent category to challenge developers to measure and reduce that footprint. Stacy Cline and Kim Buncle from GitLab's sustainability team helped judge the category.\n\n### Green Agent prize\n\n[GreenPipe](https://devpost.com/software/greenpipe) scans CI/CD pipelines for environmental impact and produces carbon footprint reports. Judges Kim Buncle and Rajesh Agadi (Google) both backed the project.\n\n### Sustainable Design bonus\n\nSustainable Design bonuses were awarded to projects with exceptional sustainability practices in their design, from model optimization techniques to energy-efficient architecture choices.\n\n* [BugFlow](https://devpost.com/software/bugflow-ai-regression-detective-ci-optimizer) turned one bug report into 10 fixes in 20 minutes.\n* [DELTA Cyber Reasoning](https://devpost.com/software/delta-cyber-reasoning-system) is automated fuzz testing for security.\n* [CarbonLint](https://devpost.com/software/carbonlint) applied code analysis to energy use.\n* [TFGuardian](https://devpost.com/software/tfguardian) features a carbon footprint analyzer, among other agents.\n\nCongratulations to all the Sustainable Design bonus winners! 
\n\nJudge Jens-Joris Decorte (TechWolf) cited the result: Costs dropped from $556 to $18 per month, a 96% carbon cut (that is a $538 monthly saving with a sustainability label on it).\n\n## Honorable mentions and the long tail\n\nSix projects received honorable mentions:\n\n\n- [SecurityMonkey](https://devpost.com/software/securitymonkey) injects known vulnerabilities into a test branch and scores how well your security scanners catch them.\n- [stregent](https://devpost.com/software/stregent) monitors CI/CD pipelines and lets developers investigate and merge fixes from WhatsApp without opening a laptop.\n- [Compliance Sentinel](https://devpost.com/software/compliance-sentinel-autonomous-devsecops-governance) scores every merge request for compliance risk and blocks the merge if critical violations are detected.\n- [Carbon Tracker](https://devpost.com/software/carbon-tracker-ij25kf) calculates the carbon footprint of each CI/CD pipeline job and posts optimization tips on the merge request.\n- [RepoWarden](https://devpost.com/software/docuguard) is the first Living Specification Engine, an AI system that captures why code was written, not just what it does.\n- [MR Compliance Auditor](https://devpost.com/software/mr-compliance-auditor) collects evidence across merge requests, maps it to SOC 2 controls, and streams compliance scores to a live dashboard.\n\nMy favorite quote from the judging came from Luca Chun Lun Lit (Anthropic), who described stregent's mobile-first approach: \"Being able to essentially code from your phone is a next level in the engineering experience.\"\n\n> Explore the 600+ entries in the [project gallery](https://gitlab.devpost.com/project-gallery).\n\n## What comes next\n\nEvery agent in this hackathon worked within a single project. They still delivered impressive results. Some participants ran a local knowledge graph alongside their agents to surface code relationships and dependencies within the repo. LORE captures project history. 
Gitdefender finds vulnerabilities. Pairing agents with richer local context is already helping contributors build sharper tools, and the next hackathon will build on that. Sign up on [contributors.gitlab.com](https://contributors.gitlab.com/) to be the first to know when details drop.\n\n## Get started\n\nA special thanks to Lee Tickett (GitLab) and Mattias Michaux (GitLab) for orchestrating the orchestrators and innovators behind this hackathon!\n\nThank you to every developer who submitted. Nearly 7,000 of you showed what GitLab Duo Agent Platform can do when a community decides to build. I am proud of what you built here, and I cannot wait to see what you build next.\n\nBuild your own agent on [GitLab Duo Agent Platform](https://docs.gitlab.com/user/duo_agent_platform/). Browse community-built agents in the [AI Catalog](https://docs.gitlab.com/user/duo_agent_platform/ai_catalog/). You orchestrate. AI accelerates.\n",[720,255],"AI/ML",{"featured":12,"template":13,"slug":722},"gitlab-ai-hackathon-2026-meet-the-winners",{"content":724,"config":735},{"title":725,"description":726,"authors":727,"heroImage":729,"date":730,"category":9,"tags":731,"body":734},"What’s new in Git 2.54.0?","Learn about release contributions, including new repository maintenance, a new command to edit commit history, a replacement for git-sizer(1), and more.",[728],"Patrick Steinhardt","https://res.cloudinary.com/about-gitlab-com/image/upload/v1776711651/sj7xxyyuimlarswbyft5.png","2026-04-20",[732,733,255],"open source","git","The Git project recently released [Git 2.54.0](https://lore.kernel.org/git/xmqqa4uxsjrs.fsf@gitster.g/T/#u). 
Let's look at a few notable highlights from this release, which includes contributions from the Git team at GitLab.\n\n## Pluggable Object Databases\n\nGit already has the ability to store references with either the \"files\" backend or the [\"reftable\" backend](https://about.gitlab.com/blog/a-beginners-guide-to-the-git-reftable-format/). This is achieved by having proper abstractions in Git that allow us to plug in different backends.\n\nBut references are just one of the two important types of data stored in repositories, the other being objects. Objects are stored in the object database, and each object database in turn consists of multiple object sources that objects can be read from or written to. Each object source either stores individual objects as so-called \"loose\" objects, or compresses multiple objects into a \"packfile\" in your `.git/objects` directory.\n\nUntil now, however, these sources did not have a proper abstraction boundary, so the storage format for objects was completely hardcoded into Git. But this is finally changing with pluggable object databases! The concept is straightforward and similar to how we did this for references in the past: Instead of having hardcoded code paths for how to store objects, we introduce an abstraction boundary that allows us to have different backends for storing objects.\n\nWhile the idea is simple, the implementation is not, as there are hardcoded assumptions about the storage formats all over Git's code base. In fact, we started working on this topic in Git 2.48, which was released in January 2025. Initially, we focused on making object-related subsystems self-contained and creating proper subsystems for the existing backends that we had in Git.\n\nWith Git 2.54, we have now reached a milestone: The object database backend is now pluggable. 
Not all of Git's functionality is covered yet, but introducing an alternate backend that handles a meaningful subset of operations is now a realistic undertaking.\n\nFor now, only local workflows like creating commits, showing commit graphs, or performing merges will work with such an alternative implementation. This notably excludes anything that interacts with a remote, such as when you want to fetch or push changes. Regardless, this is the culmination of nearly two years of work spanning almost 400 commits that have been merged upstream, and we will of course continue to iterate on this effort.\n\nSo why does this matter? The idea is that it becomes practical to introduce new storage formats into Git. Examples could be:\n- A storage format that is able to store large binary files more efficiently\n  than packfiles do today\n\n- A storage format that is custom-tailored for GitLab to ensure that we can\n  serve repositories to our users even more efficiently than we currently can\n\nThis is a large-scale effort that is likely to shape the future of Git and GitLab.\n\n*This project was led by [Patrick Steinhardt](https://gitlab.com/pks-gitlab).*\n\n## Easier editing of your commit history\n\nIn many software development projects it is common practice for developers to not only polish the code they want to contribute, but to also polish the commit history so that it becomes easy to review. The result is a set of small and atomic commits that each do one thing, with a good commit message that describes the intent of the commit as well as specific nuances.\n\nOf course, more often than not, these atomic commits are not something that just happens naturally during the development process. Instead, the author of the changes will gain a better understanding of what they are while iterating on them, and the way to split up the commits will become clearer over time. 
Furthermore, the subsequent review process may result in feedback that requires changes to the crafted commits.\n\nThe consequence of this process is that the developer will have to rewrite their commit history many times during the development process. Historically, Git has allowed for this use case via [interactive rebases](https://git-scm.com/docs/git-rebase#_interactive_mode). These interactive rebases are an extremely powerful tool: They let you reorder commits, rewrite commit messages, squash multiple commits together, or perform arbitrary edits of any commit.\n\nBut they are also somewhat arcane and hard to understand. The user needs to figure out the base commit for the rebase, they need to understand how to edit a somewhat obscure \"instruction sheet,\" and they need to be aware of how the stateful rebasing process works. For example, users are presented with an instruction sheet similar to the following when rebasing a topic branch:\n\n```shell\npick b60623f382 # t: detect errors outside of test cases # empty\npick b80cb55882 # t: prepare `test_match_signal ()` calls for `set -e`\npick 5ffe397f30 # t: prepare `test_must_fail ()` for `set -e`\npick 5e9b0cf5e1 # t: prepare `stop_git_daemon ()` for `set -e`\npick 299561e7a2 # t: prepare `git config --unset` calls for `set -e`\npick ed0e7ca2b5 # t: detect errors outside of test cases\n```\n\nSo while interactive rebases are powerful, they are also quite intimidating for the average user.\n\nIt doesn't have to be this way, though. Tools like [Jujutsu](https://www.jj-vcs.dev/latest/) provide interfaces that are much easier to use compared to Git, as you can for example simply execute `jj split` to split up a commit into two commits. With Git and interactive rebases, this use case requires a lot of different steps with confusing command line arguments.\n\nWe have thus taken inspiration from Jujutsu and have introduced a new git-history(1) command into Git that is the foundation for better history editing. 
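Just how clumsy the classic workflow is becomes clear once you spell it out. As a sketch (a throwaway repository with hypothetical file names), splitting even the tip commit takes a whole series of commands today:

```shell
# For contrast: splitting the tip commit the classic way, in a
# throwaway repository. The file names here are purely illustrative.
git init -q demo
git -C demo config user.name "Example"
git -C demo config user.email example@example.com
git -C demo commit -q --allow-empty -m "initial"
printf 'a\n' > demo/a.txt
printf 'b\n' > demo/b.txt
git -C demo add a.txt b.txt
git -C demo commit -qm "one big commit"
git -C demo reset -q HEAD^         # undo the commit, keep the changes
git -C demo add a.txt
git -C demo commit -qm "first half: add a.txt"
git -C demo add b.txt
git -C demo commit -qm "second half: add b.txt"
git -C demo rev-list --count HEAD  # prints 3: initial plus the two halves
```

Splitting a commit deeper in history is even more involved, since it additionally requires an interactive rebase with the commit marked as "edit".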
For now, this command has two subcommands:\n\n- `git history reword` allows you to easily rewrite a commit message. You simply\n  give it the commit whose message you want to reword, Git asks you for the new\n  commit message, and that's it.\n\n- `git history split` allows you to split up a commit into two, which is\n  inspired by `jj split`. You give it a commit, Git asks you which changes to\n  stage into which commit and for the two commit messages, and then you're done.\n\nThis is of course only a start, and we want to add additional subcommands over time. For example:\n\n- `git history fixup` to take staged changes and automatically amend them to a\n  specific commit\n- `git history drop` to remove a commit\n- `git history reorder` to reorder the sequence of commits\n- `git history squash` to squash a range of commits\n\nBut that's not all! In addition to making history editing easy, this new command also knows to automatically rebase all of your local branches that include the edited commit. That means you can even edit a commit that is not on the current branch, and all branches that contain the commit will be rewritten.\n\nIt may seem puzzling at first that Git is automatically rebasing dependent branches, as that is a significant departure from how git-rebase(1) works. But this is part of a bigger effort to bring better support for Stacked Diffs to Git, which are a way to create a series of dependent branches that can be reviewed independently, but that together work towards a bigger goal.\n\n*This project was led by [Patrick Steinhardt](https://gitlab.com/pks-gitlab) with support from [Elijah Newren](https://github.com/newren).*\n\n## A native replacement for git-sizer(1)\n\nThe size of a Git repository is an important factor that determines how well Git and GitLab can handle it. 
But size alone is not the only factor, as the performance of a repository is ultimately a combination of multiple different dimensions:\n\n- The depth of the commit history\n- The shape of the directory structure\n- The size of files stored in the repository\n- The number of references\n\nThese are only some of the dimensions one needs to consider when trying to predict whether Git will be able to handle a repository well.\n\nBut while it is clear that the mere repository size is insufficient, Git itself does not provide any tooling that gives the user an easy overview of these metrics. Instead, users are forced to rely on third-party tools like [git-sizer(1)](https://github.com/github/git-sizer) to fill this gap. This tool does an excellent job at surfacing this information, but it is not part of Git itself and thus needs to be installed separately.\n\nObservability of repository internals is critical to us at GitLab, so we introduced a [new `git repo structure` command into Git 2.52](https://about.gitlab.com/blog/whats-new-in-git-2-52-0/#new-subcommand-for-git-repo1-to-display-repository-metrics) to display repository metrics, which we have extended in Git 2.53 to [show inflated and disk sizes for objects by type](https://about.gitlab.com/blog/whats-new-in-git-2-53-0/#more-data-collected-in-git-repo-structure).\n\nIn Git 2.54, we are now iterating some more on this command so that we don't only show the overall size, but also show the largest objects by type:\n\n```shell\n$ git clone https://gitlab.com/git-scm/git.git\n$ cd git\n$ git repo structure\nCounting objects: 410445, done.\n| Repository structure      | Value       |\n| ------------------------- | ----------- |\n| * References              |             |\n|   * Count                 |    1.01 k   |\n|     * Branches            |       1     |\n|     * Tags                |    1.00 k   |\n|     * Remotes             |       9     |\n|     * Others              |       0     |\n|                           
|             |\n| * Reachable objects       |             |\n|   * Count                 |  410.45 k   |\n|     * Commits             |   83.99 k   |\n|     * Trees               |  164.46 k   |\n|     * Blobs               |  161.00 k   |\n|     * Tags                |    1.00 k   |\n|   * Inflated size         |    7.46 GiB |\n|     * Commits             |   57.53 MiB |\n|     * Trees               |    2.33 GiB |\n|     * Blobs               |    5.07 GiB |\n|     * Tags                |  737.48 KiB |\n|   * Disk size             |  181.37 MiB |\n|     * Commits             |   33.11 MiB |\n|     * Trees               |   40.58 MiB |\n|     * Blobs               |  107.11 MiB |\n|     * Tags                |  582.67 KiB |\n|                           |             |\n| * Largest objects         |             |\n|   * Commits               |             |\n|     * Maximum size    [1] |   17.23 KiB |\n|     * Maximum parents [2] |      10     |\n|   * Trees                 |             |\n|     * Maximum size    [3] |   58.85 KiB |\n|     * Maximum entries [4] |    1.18 k   |\n|   * Blobs                 |             |\n|     * Maximum size    [5] | 1019.51 KiB |\n|   * Tags                  |             |\n|     * Maximum size    [6] |    7.13 KiB |\n\n[1] f6ecb603ff8af608a417d7724727d6bc3a9dbfdf\n[2] 16d7601e176cd53f3c2f02367698d06b85e08879\n[3] 203ee97047731b9fd3ad220faa607b6677861a0d\n[4] 203ee97047731b9fd3ad220faa607b6677861a0d\n[5] aa96f8bc361fd84a1459440f1e7de02ab0dc3543\n[6] 07e38db6a5a03690034d27104401f6c8ea40f1fc\n```\n\nWith this information we're now almost feature-complete compared to git-sizer(1). 
We're not done yet, though. Over time we plan to add additional features such as:\n\n- Severity levels as they exist in git-sizer(1)\n- Graphs that show you the distribution of object sizes\n- The ability to scan objects reachable via a subset of references\n\n*This project was led by [Justin Tobler](https://gitlab.com/justintobler).*\n\n## New infrastructure for repository maintenance\n\nWhenever you write data into a Git repository you will typically end up adding more loose objects. Left unmanaged, this leads to a large number of separate files in your `.git/objects/` directory, which slows down several operations that want to access many objects at once. Git thus regularly packs these objects into \"packfiles\" to ensure good performance.\n\nThis isn't the only data structure that may become inefficient over time: Updating references may create loose references, reflogs will need trimming, worktrees may become stale, and caches like commit-graphs need to be refreshed regularly.\n\nAll of these tasks have historically been managed by [git-gc(1)](https://git-scm.com/docs/git-gc). However, this tool has a monolithic architecture: It simply executes all of the required tasks in a fixed sequence. This foundation is hard to extend and doesn't give the end user much flexibility in case they want to slightly modify how housekeeping is performed.\n\nThe Git project introduced the new [git-maintenance(1)](https://git-scm.com/docs/git-maintenance) tool in Git 2.29. In contrast to git-gc(1), git-maintenance(1) is not monolithic but is instead structured around tasks. These tasks are freely configurable, so users can control exactly which ones run, giving them much more fine-grained control over repository maintenance.\n\nEventually, Git migrated to using git-maintenance(1) by default. But in the beginning, the only task that was enabled by default was the git-gc(1) task, which, as you might have guessed, simply executes `git gc`. 
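Because tasks are plain configuration, you can pick and choose them per repository. A minimal sketch, using task names documented in git-maintenance(1):

```shell
# Enable a few individual maintenance tasks for one repository via
# the documented maintenance.<task>.enabled configuration keys.
git init -q repo
git -C repo config maintenance.commit-graph.enabled true
git -C repo config maintenance.loose-objects.enabled true
git -C repo config maintenance.incremental-repack.enabled true
git -C repo config maintenance.gc.enabled false
git -C repo config maintenance.commit-graph.enabled  # prints "true"
```

Tasks you don't configure explicitly simply fall back to their defaults.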
To manually run maintenance using this new command you can execute `git maintenance run`, but Git knows to execute this automatically after several other commands.\n\nOver the last couple of releases we have implemented all the individual tasks that are supported by git-gc(1) in git-maintenance(1) to ensure that we have feature parity between these two tools.\n\nFurthermore, we have implemented a new task that uses Git's modern architecture for repacking objects with [geometric compaction](https://git-scm.com/docs/git-repack#Documentation/git-repack.txt---geometricfactor).\nGeometric compaction is a much better fit for large monorepos, and with our efforts to make it work well with partial clones [that landed in Git 2.53](https://about.gitlab.com/blog/whats-new-in-git-2-53-0/#geometric-repacking-support-with-promisor-remotes), it is now a full replacement for our previous repacking strategy in Git.\n\nIn Git 2.54, we have now reached another significant milestone: Instead of using the git-gc(1)-based strategy by default, we are now using geometric repacking with fine-grained individual maintenance tasks! Besides being more efficient for large monorepos, this also gives us an easier foundation to iterate on going forward.\n\n*The git-maintenance(1) infrastructure was originally implemented by [Derrick Stolee](https://github.com/derrickstolee) and geometric maintenance was introduced by [Taylor Blau](https://github.com/ttaylorr). The effort to introduce the new fine-grained tasks and migrate to the new maintenance strategy was led by [Patrick Steinhardt](https://gitlab.com/pks-gitlab).*\n\n## Read more\n\nThis article highlighted just a few of the contributions made by GitLab and the wider Git community for this latest release. You can learn about these from the [official release announcement](https://lore.kernel.org/git/xmqqa4uxsjrs.fsf@gitster.g/T/#u) of the Git project. 
Also, check out our [previous Git release blog posts](https://about.gitlab.com/blog/tags/git/) to see other past highlights of contributions from GitLab team members.",{"slug":736,"featured":12,"template":13},"whats-new-in-git-2-54-0",{"content":738,"config":747},{"title":739,"description":740,"authors":741,"date":743,"body":744,"heroImage":745,"category":9,"tags":746},"What’s new in Git 2.53.0?","Learn about release contributions, including fixes for geometric repacking, updates to git-fast-import(1) commit signature handling options, and more.",[742],"Justin Tobler","2026-02-02","The Git project recently released [Git 2.53.0](https://lore.kernel.org/git/xmqq4inz13e3.fsf@gitster.g/T/#u). Let's look at a few notable highlights from this release, which includes\ncontributions from the Git team at GitLab.\n\n## Geometric repacking support with promisor remotes\n\nNewly written objects in a Git repository are often stored as individual loose files. To ensure good performance and optimal use of disk space, these loose objects are regularly compressed into so-called packfiles. The number of packfiles in a repository grows over time as a result of the user’s activities, like writing new commits or fetching from a remote. As the number of packfiles in a repository increases, Git has to do more work to look up individual objects. Therefore, to preserve optimal repository performance, packfiles are periodically repacked via git-repack(1) to consolidate the objects into fewer packfiles. When repacking, there are two strategies: “all-into-one” and “geometric”.\n\nThe all-into-one strategy is fairly straightforward and the current default. As its name implies, all objects in the repository are packed into a single packfile. From a performance perspective this is great for the repository, as Git only has to scan through a single packfile when looking up objects. 
The main downside of such a repacking strategy is that computing a single packfile for a repository can take a significant amount of time for large repositories.\n\nThe geometric strategy helps mitigate this concern by maintaining a geometric progression of packfiles based on their size instead of always repacking into a single packfile. To explain more plainly: when repacking, Git maintains a set of packfiles ordered by size where each packfile in the sequence is expected to be at least twice the size of the preceding packfile. If a packfile in the sequence violates this property, packfiles are combined as needed until the progression is restored. This strategy has the advantage of still keeping the number of packfiles in a repository low while minimizing the amount of work that must be done for most repacking operations.\n\nOne problem with the geometric repacking strategy was that it was not compatible with partial clones. Partial clones allow the user to clone only parts of a repository by, for example, skipping all blobs larger than 1 megabyte. This can significantly reduce the size of a repository, and Git knows how to backfill missing objects that it needs to access at a later point in time.\n\nThe result is a repository that is missing some objects, and any object that may not be fully connected is stored in a “promisor” packfile. When repacking, this promisor property needs to be retained going forward for packfiles containing a promisor object so it is known whether a missing object is expected and can be backfilled from the promisor remote. With an all-into-one repack, Git knows how to handle promisor objects properly and stores them in a separate promisor packfile. Unfortunately, the geometric repacking strategy did not know to give special treatment to promisor packfiles and instead would merge them with normal packfiles without considering whether they reference promisor objects. 
Luckily, due to a bug, the underlying git-pack-objects(1) died when using geometric repacking in a partial clone repository. This means that repositories in this configuration could not be repacked at all, which isn’t great, but is still better than repository corruption.\n\nWith the release of Git 2.53, geometric repacking now works with partial clone repositories. When performing a geometric repack, promisor packfiles are handled separately in order to preserve the promisor marker and repacked following a separate geometric progression. With this fix, the geometric strategy moves closer towards becoming the default repacking strategy. For more information check out the corresponding [mailing list thread](https://lore.kernel.org/git/20260105-pks-geometric-repack-with-promisors-v1-0-c4660573437e@pks.im/).\n\nThis project was led by [Patrick Steinhardt](https://gitlab.com/pks-gitlab).\n\n## git-fast-import(1) learned to preserve only valid signatures\n\nIn our [Git 2.52 release article](https://about.gitlab.com/blog/whats-new-in-git-2-52-0/), we covered signature-related improvements to git-fast-import(1) and git-fast-export(1). Be sure to check out that post for a more detailed explanation of these commands, how they are used, and the changes being made with regards to signatures.\n\nTo quickly recap, git-fast-import(1) provides a backend to efficiently import data into a repository and is used by tools such as [git-filter-repo(1)](https://github.com/newren/git-filter-repo) to help rewrite the history of a repository in bulk. In the Git 2.52 release, git-fast-import(1) learned the `--signed-commits=\u003Cmode>` option, similar to the same option in git-fast-export(1). With this option, it became possible to unconditionally retain or strip signatures from commits/tags.\n\nIn situations where only part of the repository history has been rewritten, any signature for rewritten commits/tags becomes invalid. 
This means git-fast-import(1) is limited to either stripping all signatures or keeping all signatures even if they have become invalid. But retaining invalid signatures doesn’t make much sense, so rewriting history with git-filter-repo(1) results in all signatures being stripped, even if the underlying commit/tag is not rewritten. This is unfortunate because if the commit/tag is unchanged, its signature is still valid and thus there is no real reason to strip it. What is really needed is a means to preserve signatures for unchanged objects, but strip invalid ones.\n\nWith the release of Git 2.53, the git-fast-import(1) `--signed-commits=\u003Cmode>` option has learned a new `strip-if-invalid` mode which, when specified, only strips signatures from commits that become invalid due to being rewritten. Thus, with this option it becomes possible to preserve some commit signatures when using git-fast-import(1). This is a critical step towards providing the foundation for tools like git-filter-repo(1) to preserve valid signatures and eventually re-sign invalid signatures.\n\nThis project was led by [Christian Couder](https://gitlab.com/chriscool).\n\n## More data collected in git-repo-structure\n\nIn the Git 2.52 release, the “structure” subcommand was introduced to git-repo(1). The intent of this command was to collect information about the repository and eventually become a native replacement for tools such as [git-sizer(1)](https://github.com/github/git-sizer). At GitLab, we host some extremely large repositories, and having insight into the general structure of a repository is critical to understand its performance characteristics. In this release, the command now also collects total size information for reachable objects in a repository to help understand the overall size of the repository. 
In the output below, you can see the command now collects both the total inflated and disk sizes of reachable objects by object type.\n\n```shell\n$ git repo structure\n\n| Repository structure | Value      |\n| -------------------- | ---------- |\n| * References         |            |\n|   * Count            |   1.78 k   |\n|     * Branches       |      5     |\n|     * Tags           |   1.03 k   |\n|     * Remotes        |    749     |\n|     * Others         |      0     |\n|                      |            |\n| * Reachable objects  |            |\n|   * Count            | 421.37 k   |\n|     * Commits        |  88.03 k   |\n|     * Trees          | 169.95 k   |\n|     * Blobs          | 162.40 k   |\n|     * Tags           |    994     |\n|   * Inflated size    |   7.61 GiB |\n|     * Commits        |  60.95 MiB |\n|     * Trees          |   2.44 GiB |\n|     * Blobs          |   5.11 GiB |\n|     * Tags           | 731.73 KiB |\n|   * Disk size        | 301.50 MiB |\n|     * Commits        |  33.57 MiB |\n|     * Trees          |  77.92 MiB |\n|     * Blobs          | 189.44 MiB |\n|     * Tags           | 578.13 KiB |\n```\n\nThe keen-eyed among you may also have noticed that the size values in the table output are now listed in a more human-friendly manner with units appended. In subsequent releases we hope to further expand this command's output to provide additional data points such as the largest individual objects in the repository.\n\nThis project was led by [Justin Tobler](https://gitlab.com/justintobler).\n\n## Read more\n\nThis article highlighted just a few of the contributions made by GitLab and\nthe wider Git community for this latest release. You can learn about these from\nthe [official release announcement](https://lore.kernel.org/git/xmqq4inz13e3.fsf@gitster.g/T/#u) of the Git project. 
Also, check\nout our [previous Git release blog posts](https://about.gitlab.com/blog/tags/git/)\nto see other past highlights of contributions from GitLab team members.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663087/Blog/Hero%20Images/git3-cover.png",[732,733,255],{"featured":24,"template":13,"slug":748},"whats-new-in-git-2-53-0",{"promotions":750},[751,765,777,789],{"id":752,"categories":753,"header":755,"text":756,"button":757,"image":762},"ai-modernization",[754],"ai-ml","Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":758,"config":759},"Get your AI maturity score",{"href":760,"dataGaName":761,"dataGaLocation":237},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":763},{"src":764},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":766,"categories":767,"header":769,"text":756,"button":770,"image":774},"devops-modernization",[768,563],"product","Are you just managing tools or shipping innovation?",{"text":771,"config":772},"Get your DevOps maturity score",{"href":773,"dataGaName":761,"dataGaLocation":237},"/assessments/devops-modernization-assessment/",{"config":775},{"src":776},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":778,"categories":779,"header":781,"text":756,"button":782,"image":786},"security-modernization",[780],"security","Are you trading speed for security?",{"text":783,"config":784},"Get your security maturity score",{"href":785,"dataGaName":761,"dataGaLocation":237},"/assessments/security-modernization-assessment/",{"config":787},{"src":788},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":790,"paths":791,"header":794,"text":795,"button":796,"image":801},"github-azure-migration",[792,793],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's 
Azure move?","GitHub is already rebuilding around Azure. Find out what it means for you.",{"text":797,"config":798},"See how GitLab compares to GitHub",{"href":799,"dataGaName":800,"dataGaLocation":237},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":802},{"src":776},{"header":804,"blurb":805,"button":806,"secondaryButton":811},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":807,"config":808},"Get your free trial",{"href":809,"dataGaName":44,"dataGaLocation":810},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":499,"config":812},{"href":48,"dataGaName":49,"dataGaLocation":810},1777313724834]