John Summerfield wrote:
Nobody should have the ability to update code owned by the next stage.
That's not possible with most version control systems. Everyone has
access to everything.
It's essential. You don't want everyone to be able to mess with
production code.
I meant that no one ever changes anything that has ever been
committed. Everyone makes changes in their own workspace and a commit
becomes a new revision. Anyone can check out any revision that has
ever been committed. So, each stage checks out their own appropriate
revision or tagged copy based on the workflow regardless of what else
is happening in the repository. It doesn't matter that someone can
check in garbage; what matters is that the garbage revision is not the
one that QA tests, approves, and tags to go to production.
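For example, with something like Subversion the workflow might look
roughly like this (URLs, paths and revision numbers are only
placeholders):

    # A developer commits to trunk; this only creates a new revision,
    # it never rewrites an existing one.
    svn commit -m "Fix widget handling" trunk/

    # QA checks out the exact revision it wants to test.
    svn checkout -r 1234 http://repo.example.com/project/trunk widget-qa

    # Once it passes, QA tags that same revision.
    svn copy -r 1234 http://repo.example.com/project/trunk \
        http://repo.example.com/project/tags/RELEASE-1.2 \
        -m "Tag r1234 as RELEASE-1.2"

    # Production only ever checks out the approved tag.
    svn checkout http://repo.example.com/project/tags/RELEASE-1.2 /srv/app

Nothing in that sequence modifies an existing revision; the tag is just
a cheap copy recording which revision was approved.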
How do you propose minimising the possibility of someone of ill intent
making unauthorised changes?
With revision control systems, you always have access to all versions
and the ability to see the differences between them and who made the
changes (most useful with text/source). If something is important, I'd
expect someone to review the changes as well as perform functional
tests on any generated programs.
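With Subversion, say, that review is only a couple of commands
(revision numbers and URL made up for the example):

    # See who committed what between the current production revision
    # and the candidate revision.
    svn log -r 1200:1234 http://repo.example.com/project/trunk

    # Review the actual changes line by line.
    svn diff -r 1200:1234 http://repo.example.com/project/trunk

    # See who last touched each line of a particular file.
    svn blame http://repo.example.com/project/trunk/deploy.sh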
Think about what the DoD, any big bank, Qantas, Westfield, or any other
significant business would expect.
Don't they outsource everything these days?
You've got Unix filesystem permissions and SELinux at your disposal to
control direct repository access. And the repository doesn't have to
be on the same machine as any of the users.
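Something along these lines, for instance (group name and repository
path are made up for the example); remote users then go through
svn+ssh or an svnserve/httpd front end and never touch the raw files:

    # Only members of the svn group can read or write the repository files.
    groupadd svn
    chown -R root:svn /var/svn/project
    chmod -R g+rw,o-rwx /var/svn/project

    # Keep new files in the repository owned by the group.
    find /var/svn/project -type d -exec chmod g+s {} +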
Unix is weak. SELinux is cumbersome.
Compared to? How could you tell if something else is better?
If you don't trust your file access control, these don't matter much.
Nobody should trust anything they're not forced to: that's what
Microsoft means when it talks of "trusted computing."
Why trust the people supplying something they happen to call "trusted"?
--
Les Mikesell
lesmikesell@xxxxxxxxx
--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list