Indeed
LinkedIn
Harris-Stowe State University
Veterans United

Acronyms

TFS: Team Foundation Server, now Azure DevOps Server.
I have experience with this.
SSRS: SQL Server Reporting Services.
I have NO experience with this.
WPF: Windows Presentation Foundation. A UI framework for Windows-based desktop applications.
I have some XAML experience.
DB2: A family of data management products, including database servers, developed by IBM. It supports the relational model, object-relational features, and non-relational structures such as JSON and XML.
I have experience with Microsoft SQL Server and PostgreSQL.
ADO.NET: Includes .NET Framework data providers for connecting to a database, executing commands, and retrieving results (a short sketch of this pattern follows the acronym list).
I have experience with Dapper and Entity Framework Core.
WCF: Windows Communication Foundation. A framework for building service-oriented applications; using WCF, you can send data as asynchronous messages from one service endpoint to another.
I have NO experience with this.
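
A minimal sketch of the ADO.NET pattern mentioned in the ADO.NET entry above (open a connection, execute a command, read the results). It assumes the Microsoft.Data.SqlClient provider for SQL Server; the connection string and the Customers table are placeholders, not real project details.

  using System;
  using Microsoft.Data.SqlClient; // current SQL Server ADO.NET provider

  class AdoNetSketch
  {
      static void Main()
      {
          // Placeholder connection string and table name.
          const string connectionString = "Server=.;Database=Demo;Integrated Security=true;";

          // Connect to the database.
          using var connection = new SqlConnection(connectionString);
          connection.Open();

          // Execute a command and retrieve results through a forward-only reader.
          using var command = new SqlCommand("SELECT Id, Name FROM Customers", connection);
          using var reader = command.ExecuteReader();

          while (reader.Read())
          {
              Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
          }
      }
  }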

Design

Unit Testing

?

Development

Agile Software Development

?

Test Driven Development

Test-driven development (TDD) is a way of writing code in which the developer first writes an automated unit-level test case that fails, then writes just enough code to make that test pass, then refactors both the test code and the production code, and finally repeats the cycle with another test case.

Coding Cycle

The TDD steps vary somewhat by author in count and description, but are generally as follows (a small worked sketch in C# follows the list):

  1. List scenarios for the new feature
    • List the expected variants in the new behavior: "There's the basic case, then what if this service times out, then what if the key isn't in the database yet, …" The developer can discover these specifications by asking about use cases and user stories. A key benefit of TDD is that it makes the developer focus on requirements before writing code, in contrast with the usual practice, where unit tests are written only after the code.
  2. Write a test for an item on the list
    • Write an automated test that would pass if the variant in the new behavior is met.
  3. Run all tests. The new test should fail – for expected reasons
    • This shows that new code is actually needed for the desired feature. It validates that the test harness is working correctly. It rules out the possibility that the new test is flawed and will always pass.
  4. Write the simplest code that passes the new test
    • Inelegant code and hard coding are acceptable; the code will be honed in Step 6. No code should be added beyond the tested functionality.
  5. All tests should now pass
    • If any fail, fix failing tests with minimal changes until all pass.
  6. Refactor as needed while ensuring all tests continue to pass
    • Code is refactored for readability and maintainability. In particular, hard-coded test data should be removed from the production code. Running the test suite after each refactor ensures that no existing functionality is broken.
    • Examples of refactoring:
      • moving code to where it most logically belongs
      • removing duplicate code
      • making names self-documenting
      • splitting methods into smaller pieces
      • re-arranging inheritance hierarchies
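
A minimal sketch of one pass through this cycle, written in C# with xUnit; the framework choice and the PriceCalculator names are illustrative assumptions, not taken from any particular project.

  using Xunit;

  // Steps 2-3: the test is written first and fails, because PriceCalculator
  // does not exist yet (the code does not even compile).
  public class PriceCalculatorTests
  {
      [Fact]
      public void ApplyDiscount_TenPercentOffOneHundred_ReturnsNinety()
      {
          var calculator = new PriceCalculator();

          decimal result = calculator.ApplyDiscount(100m, 0.10m);

          Assert.Equal(90m, result);
      }
  }

  // Step 4: the simplest code that passes. A hard-coded "return 90m;" would also
  // be acceptable at first; it is generalized during Step 6 once more scenarios
  // from the list are turned into tests.
  public class PriceCalculator
  {
      public decimal ApplyDiscount(decimal price, decimal rate)
      {
          return price - (price * rate);
      }
  }

Each remaining scenario from Step 1 repeats the same loop: a new failing test, the minimal change that makes it pass, then a refactor with all tests green.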

Each test should be small, and commits should be made often. If new code fails some tests, the programmer can undo or revert rather than debug excessively.

When using external libraries, it is important not to write tests that are so small as to effectively test merely the library itself, unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
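
One way to keep tests aimed at the application's own logic is to put the external library behind a small interface and stub it in the test, so the test exercises our rule rather than the library's plumbing. The names below (IRateClient, RateService) and the use of xUnit are hypothetical, for illustration only.

  using Xunit;

  // The external rate-provider library would sit behind this interface.
  public interface IRateClient
  {
      decimal? TryGetRate(string currency);
  }

  public class RateService
  {
      private readonly IRateClient _client;

      public RateService(IRateClient client) => _client = client;

      // Our own rule, which is what the test should cover:
      // fall back to 1.0 when the provider has no quote.
      public decimal GetRateOrDefault(string currency) =>
          _client.TryGetRate(currency) ?? 1.0m;
  }

  public class RateServiceTests
  {
      private class StubClient : IRateClient
      {
          public decimal? TryGetRate(string currency) => null;
      }

      [Fact]
      public void GetRateOrDefault_NoQuote_FallsBackToOne()
      {
          var service = new RateService(new StubClient());

          Assert.Equal(1.0m, service.GetRateOrDefault("EUR"));
      }
  }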

Best practices

Test structure

Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup.

Setup
Put the Unit Under Test (UUT) or the overall test system in the state needed to run the test.
Execution
Trigger/drive the UUT to perform the target behavior and capture all output, such as return values and output parameters. This step is usually very simple.
Validation
Ensure the results of the test are correct. These results may include explicit outputs captured during execution or state changes in the UUT.
Cleanup
Restore the UUT or the overall test system to the pre-test state. This restoration permits another test to execute immediately after this one. In some cases, in order to preserve information for possible test-failure analysis, cleanup can instead be performed at the start of the test, just before its setup runs.
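
A sketch of a single test laid out in these four phases, assuming xUnit and a hypothetical TemperatureLog class.

  using System.IO;
  using Xunit;

  public class TemperatureLog
  {
      private readonly string _path;

      public TemperatureLog(string path) => _path = path;

      public void Append(double reading) => File.AppendAllText(_path, reading + "\n");
  }

  public class TemperatureLogTests
  {
      [Fact]
      public void Append_WritesOneLinePerReading()
      {
          // Setup: put the UUT in the state needed to run the test.
          string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
          var log = new TemperatureLog(path);

          try
          {
              // Execution: drive the UUT and capture its output.
              log.Append(21.5);
              log.Append(22.0);

              // Validation: ensure the captured results are correct.
              string[] lines = File.ReadAllLines(path);
              Assert.Equal(2, lines.Length);
          }
          finally
          {
              // Cleanup: restore the pre-test state so other tests can run.
              File.Delete(path);
          }
      }
  }

In xUnit-style frameworks, setup and cleanup are often moved into the test class constructor and Dispose method rather than written inline as above.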

Individual best practices

Some best practices an individual can follow: separate common set-up and tear-down logic into test support services used by the appropriate test cases; keep each test oracle focused on only the results necessary to validate its test; and design time-related tests to tolerate execution on non-real-time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution. It is also suggested to treat test code with the same respect as production code: it must work correctly for both positive and negative cases, last a long time, and be readable and maintainable. Teams can review tests and test practices together to share effective techniques and catch bad habits.[15]
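
A sketch of both ideas, assuming xUnit: the class fixture stands in for a test support service that holds common set-up, and the timing assertion allows roughly a 10 percent margin over a made-up 500 ms budget. OrderDbFixture and its seeded data are hypothetical.

  using System.Collections.Generic;
  using System.Diagnostics;
  using Xunit;

  // Shared test support service: common set-up lives here once instead of
  // being repeated in every test. A real fixture might open an in-memory
  // database here and tear it down when the test class finishes.
  public class OrderDbFixture
  {
      public List<int> SeededOrderIds { get; } = new() { 1, 2, 3 };
  }

  public class OrderServiceTests : IClassFixture<OrderDbFixture>
  {
      private readonly OrderDbFixture _db;

      public OrderServiceTests(OrderDbFixture db) => _db = db;

      [Fact]
      public void BatchImport_CompletesWithinBudget()
      {
          var stopwatch = Stopwatch.StartNew();

          // Stand-in for the operation under test, reading the shared seed data.
          int processed = _db.SeededOrderIds.Count;

          stopwatch.Stop();

          Assert.Equal(3, processed);

          // Allow roughly a 10% margin over the 500 ms budget so the test does
          // not fail spuriously on a loaded, non-real-time operating system.
          Assert.True(stopwatch.ElapsedMilliseconds <= 550);
      }
  }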

Practices to avoid, or "anti-patterns"

Advantages and Disadvantages

Advantages

Test Driven Development (TDD) is a software development approach where tests are written before the actual code. It offers several advantages:

Comprehensive Test Coverage
TDD ensures that all new code is covered by at least one test, leading to more robust software.
Enhanced Confidence in Code
Developers gain greater confidence in the code's reliability and functionality.
Enhanced Confidence in Tests
Because each test is known to fail before the corresponding implementation exists, we know that the tests actually exercise the implementation correctly.
Well-Documented Code
The process naturally results in well-documented code, as each test clarifies the purpose of the code it tests.
Requirement Clarity
TDD encourages a clear understanding of requirements before coding begins.
Facilitates Continuous Integration
It integrates well with continuous integration processes, allowing for frequent code updates and testing.
Boosts Productivity
Many developers find that TDD increases their productivity.
Reinforces Code Mental Model
TDD helps in building a strong mental model of the code's structure and behavior.
Emphasis on Design and Functionality
It encourages a focus on the design, interface, and overall functionality of the program.
Reduces Need for Debugging
By catching issues early in the development process, TDD reduces the need for extensive debugging later.
System Stability
Applications developed with TDD tend to be more stable and less prone to bugs.[19]

Disadvantages

However, TDD is not without its drawbacks:

Increased Code Volume
Implementing TDD can result in a larger codebase as tests add to the total amount of code written.
False Security from Tests
A large number of passing tests can sometimes give a misleading sense of security regarding the code's robustness.
Maintenance Overheads
Maintaining a large suite of tests can add overhead to the development process.
Time-Consuming Test Processes
Writing and maintaining tests can be time-consuming.
Testing Environment Set-Up
TDD requires setting up and maintaining a suitable testing environment.
Learning Curve
It takes time and effort to become proficient in TDD practices.
Overcomplication
Designing code to cater for complex tests via TDD can lead to code that is more complicated than necessary.
Neglect of Overall Design
Focusing too narrowly on passing tests can sometimes lead to neglect of the bigger picture in software design.