
Starbound should adopt Linux-like Merge Cycles to get rid of lag in development

Discussion in 'Starbound Discussion' started by ThaOneDon, Apr 19, 2014.

  1. ThaOneDon

    ThaOneDon Parsec Taste Tester

    BASICALLY:
    *Open a merge window - pull in everything new that's stable enough, over a few weeks.
    *Close the merge window for major stuff and accept bug fixes only - the patch rate will slow down. Do this for a few weeks, then move it to Unstable.
    *Close that too and accept only regression fixes.
    *Move it to Stable once the regressions are dealt with.

    The whole "cycle" shouldn't take more than a month.
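    The steps above boil down to a simple acceptance rule per phase. Here's a toy sketch of it; the phase names and patch categories are my own labels, not anything Chucklefish (or the kernel docs) actually uses:

```python
# Toy model of the proposed merge cycle: each phase accepts only
# certain kinds of patches, narrowing as the release approaches.

PHASES = {
    "merge_window":    {"feature", "bugfix", "regression_fix"},  # everything stable enough
    "stabilization":   {"bugfix", "regression_fix"},             # window closed for major stuff
    "regression_only": {"regression_fix"},                       # last stop before Stable
}

def accepted(phase: str, patch_kind: str) -> bool:
    """Is a patch of this kind accepted during this phase?"""
    return patch_kind in PHASES[phase]

print(accepted("merge_window", "feature"))      # True
print(accepted("stabilization", "feature"))     # False
print(accepted("regression_only", "bugfix"))    # False
```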

    In Detail:
    Linus Torvalds and his team struggled with lagging development and needed a way to keep "rapid" improvements flowing into the Linux kernel. This is what they came up with:
    https://www.kernel.org/doc/Documentation/development-process/

    They use it to this day; I don't believe they've changed much of it.

    Important Stuff from it:
    "Linux kernel development in the early 1990's was a pretty loose affair,
    with relatively small numbers of users and developers involved. With a
    user base in the millions and with some 2,000 developers involved over the
    course of one year, the kernel has since had to evolve a number of
    processes to keep development happening smoothly. A solid understanding of
    how the process works is required in order to be an effective part of it."

    "At the beginning of each development
    cycle, the "merge window" is said to be open. At that time, code which is
    deemed to be sufficiently stable (and which is accepted by the development
    community) is merged into the mainline kernel"

    "The merge window lasts for approximately two weeks. At the end of this
    time, Linus Torvalds will declare that the window is closed and release the
    first of the "rc" kernels."

    "Over the next six to ten weeks, only patches which fix problems should be
    submitted to the mainline."

    "As fixes make their way into the mainline, the patch rate will slow over
    time."

    "How do the developers decide when to close the development cycle and create
    the stable release? The most significant metric used is the list of
    regressions from previous releases. No bugs are welcome, but those which
    break systems which worked in the past are considered to be especially
    serious. For this reason, patches which cause regressions are looked upon
    unfavorably and are quite likely to be reverted during the stabilization
    period."

    "The developers' goal is to fix all known regressions before the stable
    release is made. In the real world, this kind of perfection is hard to
    achieve; there are just too many variables in a project of this size.
    There comes a point where delaying the final release just makes the problem
    worse; the pile of changes waiting for the next merge window will grow
    larger, creating even more regressions the next time around."

    Once a stable release is made, its ongoing maintenance is passed off to the
    "stable team".
     
    Last edited: Apr 19, 2014
  2. kenata

    kenata Starship Captain

    It is not completely clear that you have a deep understanding of modern development processes or how such processes might be used by a particular team. For a huge group of unrelated/non-communicative development teams working in tandem, the OP's system might be useful. For a smaller team like the one employed by Chucklefish, this development process is easily outperformed by Agile/Scrum or any other fast-paced iterative process.

    Frankly, it would be unwise for a game development team to look to a distributed open-source operating system's development process for suggestions. With an engineering team of fewer than 10 people, merging is typically not a significant problem. Most modern source code repositories can gracefully handle small merge conflicts, which are the most typical kind on smaller teams, since developers can divide tasks in a way that minimizes the potential for more significant conflicts. The "Linux Merge" model you are proposing is unnecessary and probably out of line with this type of game development.
     
  3. ThaOneDon

    ThaOneDon Parsec Taste Tester

    Apparently it is, because the entire development has slowed to a crawl, in my opinion. Lots of fixes are introduced to Unstable, and they "still" haven't stabilized them enough to be included in Stable. I've told them before not to stray too far from Stable; this model guarantees they never will. Also, it may be a small team, but the developers are pulling stuff from mods. I'd say it's almost like pulling patches with hundreds of modders involved.

    Also, this model works with teams of "any" size; the Linux kernel started with one guy, Torvalds, and kept growing, adding more developers in the process.

    In the end, every game is a program that can do graphics.
     
    Last edited: Apr 19, 2014
  4. kenata

    kenata Starship Captain

    Again, you don't really seem to understand how this type of development works. It is not necessary to maintain stable/unstable branches the way you are suggesting. That path would add significant overhead with almost no real-world benefit. Though I am not certain of the process used at Chucklefish, the industry standard is basically the following:

    • A single stable branch used for release candidates and long-scale QA passes
    • An unstable development branch used for shorter-burst feature testing
    • A large number of individual "feature"/development branches used for review
    Using this model, the entire dev process boils down to a few simple steps:
    1. Create a feature branch for new development, to minimize conflicts and bugs caused in other systems
    2. As developers finish new features, the feature branches are reviewed and relevant working code is brought into the unstable branch
    3. QA regularly tests the unstable branch for any known issues.
    4. Once QA signs off on a new feature (i.e., completes a successful test pass), the relevant code may be brought down into the stable branch as part of the next RC, along with any relevant additional testing.
    5. Upon final validation, the RC is pushed into production.
    The vast majority of the relevant merging is handled by the source code repositories, as this process attempts to minimize the size and scope of potential conflicts. There are some good articles and blog posts about these types of models. Given that Chucklefish is in the process of setting up a new office and moving a large number of people, it is not surprising that they are not pushing unvalidated code into "stable" production.
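    Those five steps can be sketched as a tiny promotion rule. The `Change` class, its flags, and the branch names below are my own illustration, not Chucklefish's actual tooling; a real team would do this with git branches and a CI/QA system:

```python
from dataclasses import dataclass

@dataclass
class Change:
    name: str
    reviewed: bool = False       # code review passed?
    qa_signed_off: bool = False  # QA test pass completed?

def target_branch(change: Change) -> str:
    """How far along the pipeline a change may be promoted."""
    if not change.reviewed:
        return "feature/" + change.name  # stays on its own branch (step 1)
    if not change.qa_signed_off:
        return "unstable"                # merged for feature testing (steps 2-3)
    return "stable"                      # eligible for the next RC (steps 4-5)

print(target_branch(Change("new-biome")))                 # feature/new-biome
print(target_branch(Change("new-biome", reviewed=True)))  # unstable
print(target_branch(Change("new-biome", True, True)))     # stable
```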
     
    Last edited: Apr 19, 2014
  5. ThaOneDon

    ThaOneDon Parsec Taste Tester

    Could you explain to me what these overheads are? Because all I'm seeing is a mess being replaced with one streamlined and stable system.

    Having hundreds of different branches is wasteful and cluttered to hell.

    This is "the" thing Torvalds was trying to avoid.

    Also, it sounds like it's the other way around: before, you said only 10 people are developing this game; now you're talking about how to manage giant development teams.
     
    Last edited: Apr 19, 2014
  6. kenata

    kenata Starship Captain

    Let's add this up. "A few weeks" for the window, "a few more weeks" for the close, and some undisclosed amount of time for regressions. At 6-10 weeks for "patches", you are talking about 3-6 months of development time per cycle. For a team the size of Chucklefish, the time frames are probably more like 2-3 weeks.
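    A rough back-of-the-envelope for that cycle length. The week counts for "a few weeks" are guesses on my part; the 6-10 week stabilization figure comes from the kernel doc quoted in the OP:

```python
# Summing the proposed phases as (low, high) week ranges.
merge_window = (2, 4)   # "a few weeks" of open window (assumed)
major_close  = (2, 4)   # "a few more weeks" accepting only bug fixes (assumed)
regression   = (6, 10)  # kernel doc's fixes-only stabilization period

low  = merge_window[0] + major_close[0] + regression[0]
high = merge_window[1] + major_close[1] + regression[1]
print(f"roughly {low}-{high} weeks per cycle")  # roughly 10-18 weeks per cycle
```

    Either way, it's a far cry from the OP's "shouldn't take more than a month."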

    Your scales are just wildly off. The problem with hundreds of branches is in maintaining them, not creating them. Most feature branches are created, used, and then thrown away after the merge. Some teams prefer individual repositories, which are regularly merged back to the current state after review.
     
  7. ThaOneDon

    ThaOneDon Parsec Taste Tester

    Patches come and go as development chugs along. "A few weeks" was just my suggestion; a smaller team could use a smaller timeline. Merge windows should be clear and set in stone so we always get updates regularly, even the little ones.

    I don't see why branches have to be thrown away. Everybody can keep their branches. What gets released/merged should go through a system, though, that's simple enough for everyone to build their own stuff around it.

    What I recommended is the simplest system I could find that guarantees regular updates.
     
  8. votgs

    votgs Scruffy Nerf-Herder


    ...do you do development for the government or something? Because I -swear- I've seen all your reasoning during development cycles for small-scope software that really only needed fewer than twenty people looking at it... and that reasoning indicated we should apply HUGE-scope, bureaucrat-style overhead to something that didn't need it AT ALL, but it gave some engineer or middle manager a sweet chub.

    Sweet Jesus, it's like bolting an international-corporation-style structure onto a mom-and-pop convenience store. Theoretically, it could work. Practically, it makes no damn sense at all.
     
  9. ThaOneDon

    ThaOneDon Parsec Taste Tester

    Well, the Linux kernel is developed this way. It's not an illusion, and it's not developed by a government or a corporation.
    That project started as a small team and grew over time.
    Better to do it early, so that once we're big and all the problems show up, we can deal with them easily.
     
    Last edited: Apr 19, 2014
  10. kenata

    kenata Starship Captain

    You can't compare the two. The Linux kernel is being developed in unison by many teams across the world. Some of those teams are close to the size of the entire Chucklefish staff. Linux is used across most industries as a standard OS for servers and has become a standard OS for mobile devices. Given this, its stability requirements dwarf those of any game development firm by several orders of magnitude. Consider that people might die, or companies collapse, if particular Linux servers crash.
     
  11. ThaOneDon

    ThaOneDon Parsec Taste Tester

    Never heard of Linux "crashing"; it's been rock solid. Instabilities are almost always caused by package breakage, which usually has nothing to do with the kernel.
     
  12. SugarShow

    SugarShow Scruffy Nerf-Herder

    Hope one day I can play Starbound without any lag, like Terraria.
     
  13. ThaOneDon

    ThaOneDon Parsec Taste Tester

    That's what we all want.
     
  14. Litagano Motscoud

    Litagano Motscoud Master Astronaut

    That's because the team is busy relocating to an office in the UK
     
