# Troubleshooting failures
You can add diagnostic information to your tests by using clues and log statements.

You should add this information when you write a test, not just after it fails. This saves you from having to write extra diagnostic code when the test fails later.

The information is only displayed when a test fails, so it will not clutter your test report.
## Using `clue` with `expect`

The `expect` and `expect.all` statements are used to assert on boolean values. On their own, they don't print any diagnostic information. You can add information by using the `clue` function.
```scala
val x = List(1, 2, 3)

expect(clue(x.size) > 2)
```
You can add clues around any variable.
```scala
expect(clue(x).size > 2)
```
You can also nest them.
```scala
expect(clue(clue(x).size) > 2)
```
The clues are displayed when the test fails.
```scala
import weaver._

object ExpectWithClueTest extends FunSuite {

  test("This fails, and we can diagnose why") {
    val x = 1
    val y = 2
    expect(clue(x) > clue(y))
  }

  test("This fails, but we don't know why") {
    val x = 1
    val y = 2
    expect(x > y)
  }

  test("This succeeds so nothing is printed") {
    val x = 1
    val y = 0
    expect(clue(x) > clue(y))
  }
}
```
### Example report
```
repl.MdocSessionMdocAppExpectWithClueTest
+ This succeeds so nothing is printed 0ms
- This fails, and we can diagnose why 0ms
- This fails, but we don't know why 0ms

*************FAILURES*************
repl.MdocSessionMdocAppExpectWithClueTest
- This fails, and we can diagnose why 0ms

  assertion failed (troubleshooting_failures.md:46)

  expect(clue(x) > clue(y))

  Clues {
    x: Int = 1
    y: Int = 2
  }

  troubleshooting_failures.md:46
  expect(clue(x) > clue(y))
  ^

- This fails, but we don't know why 0ms

  assertion failed (troubleshooting_failures.md:52)

  expect(x > y)
  Use the `clue` function to troubleshoot

  troubleshooting_failures.md:52
  expect(x > y)
  ^

Total 3, Failed 2, Passed 1, Ignored 0
```
## Logging information
You can log general information using a lazy logger. Log statements are only printed when a test fails and are displayed along with the test failure.

Access the logger by writing a `loggedTest`.
```scala
import weaver._

object LoggedTests extends SimpleIOSuite {

  case class Response(body: String, status: Int)

  loggedTest("This test fails") { log =>
    val response = Response("this went wrong", 500)
    for {
      _ <- log.info(s"The response body is: ${response.body}")
    } yield expect.eql(response.status, 200)
  }

  loggedTest("This test also fails, and has separate logs") { log =>
    val response = Response("this also went wrong", 400)
    for {
      _ <- log.info(s"The response body is: ${response.body}")
    } yield expect.eql(response.status, 200)
  }

  loggedTest("This test succeeds, so no logs are printed") { log =>
    val response = Response("OK", 200)
    for {
      _ <- log.info(s"The response body is: ${response.body}")
    } yield expect.eql(response.status, 200)
  }
}
```
### Example report
```
repl.MdocSessionMdocAppLoggedTests
+ This test succeeds, so no logs are printed 10ms
- This test fails 15ms
- This test also fails, and has separate logs 13ms

*************FAILURES*************
repl.MdocSessionMdocAppLoggedTests
- This test fails 15ms

  Values not equal: (troubleshooting_failures.md:83)

  => Diff (- obtained, + expected)
  -200
  +500

  troubleshooting_failures.md:83
  } yield expect.eql(response.status, 200)
  ^
  [INFO] 12:51:23 [troubleshooting_failures.md:82] The response body is: this went wrong

- This test also fails, and has separate logs 13ms

  Values not equal: (troubleshooting_failures.md:90)

  => Diff (- obtained, + expected)
  -200
  +400

  troubleshooting_failures.md:90
  } yield expect.eql(response.status, 200)
  ^
  [INFO] 12:51:23 [troubleshooting_failures.md:89] The response body is: this also went wrong

Total 3, Failed 2, Passed 1, Ignored 0
```
## Use `log.info` instead of `println`
You should use log statements instead of `println` statements.

Because weaver runs tests in parallel, it is difficult to pinpoint which test a `println` statement corresponds to. Unlike log statements, `println` statements are always printed, regardless of whether a test succeeds or fails, so they can clutter the test output.

Log statements are only printed if a test fails. The log statements are displayed along with the test failure they correspond to, so it's easy to troubleshoot tests even though they are run in parallel.
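As a minimal sketch of this conversion (the suite name and values here are hypothetical, not taken from the examples above), a `println` inside a `loggedTest` becomes a `log.info` step in the same `for` comprehension:

```scala
import weaver._

object PreferLogSuite extends SimpleIOSuite {

  loggedTest("prefer log.info over println") { log =>
    val result = 42
    for {
      // println(s"result: $result") would always print,
      // interleaved with output from other tests running in parallel
      _ <- log.info(s"result: $result") // printed only if this test fails
    } yield expect.eql(result, 42)
  }
}
```

Because `log.info` returns an effect, it sequences naturally with the rest of the test body instead of executing as a side effect.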
## Accessing the logger along with shared resources
You can access the logger along with shared resources by using the second argument of the `test` function.
```scala
import weaver._
import cats.effect._
import org.http4s.ember.client._
import org.http4s.client._

object HttpSuite extends IOSuite {

  override type Res = Client[IO]
  override def sharedResource: Resource[IO, Res] =
    EmberClientBuilder.default[IO].build

  // The log is passed as the second argument
  test("This test fails") { (httpClient, log) =>
    for {
      statusCode <- httpClient.get("https://httpbin.org/oops") { response =>
        for {
          _ <- log.info(s"Content length: ${response.contentLength}")
        } yield response.status.code
      }
    } yield expect.eql(statusCode, 200)
  }
}
```
### Example report
```
repl.MdocSessionMdocAppHttpSuite
- This test fails 610ms

*************FAILURES*************
repl.MdocSessionMdocAppHttpSuite
- This test fails 610ms

  Values not equal: (troubleshooting_failures.md:137)

  => Diff (- obtained, + expected)
  -200
  +503

  troubleshooting_failures.md:137
  } yield expect.eql(statusCode, 200)
  ^
  [INFO] 12:51:24 [troubleshooting_failures.md:134] Content length: Some(162)

Total 1, Failed 1, Passed 0, Ignored 0
```
## Displaying the correct source location
Assertions such as `expect` and `expect.eql` display their source location when they fail.

If you make assertions in helper functions, you might want to display the source location of the call to the helper instead. You can do this by requiring an implicit `SourceLocation` argument.
```scala
import weaver._

object SourceLocationSuite extends FunSuite {

  case class Response(body: String, status: Int)

  test("This test fails") {
    val response = Response("this went wrong", 500)
    expectOk(response) // The failure points here
  }

  test("This test fails with a different source location") {
    val response = Response("this also went wrong", 500)
    expectOk(response) // The failure points here
  }

  def expectOk(response: Response)(implicit loc: SourceLocation): Expectations =
    expect.eql(response.status, 200) && expect.eql(response.body, "OK")
}
```
### Example report
```
repl.MdocSessionMdocAppSourceLocationSuite
- This test fails 2ms
- This test fails with a different source location 0ms

*************FAILURES*************
repl.MdocSessionMdocAppSourceLocationSuite
- This test fails 2ms

  [0] Values not equal: (troubleshooting_failures.md:160)
  [0]
  [0] => Diff (- obtained, + expected)
  [0] -200
  [0] +500
  [0]
  [0] troubleshooting_failures.md:160
  [0] expectOk(response) // The failure points here
  [0] ^
  [1] Values not equal: (troubleshooting_failures.md:160)
  [1]
  [1] => Diff (- obtained, + expected)
  [1] -OK
  [1] +this went wrong
  [1]
  [1] troubleshooting_failures.md:160
  [1] expectOk(response) // The failure points here
  [1] ^

- This test fails with a different source location 0ms

  [0] Values not equal: (troubleshooting_failures.md:165)
  [0]
  [0] => Diff (- obtained, + expected)
  [0] -200
  [0] +500
  [0]
  [0] troubleshooting_failures.md:165
  [0] expectOk(response) // The failure points here
  [0] ^
  [1] Values not equal: (troubleshooting_failures.md:165)
  [1]
  [1] => Diff (- obtained, + expected)
  [1] -OK
  [1] +this also went wrong
  [1]
  [1] troubleshooting_failures.md:165
  [1] expectOk(response) // The failure points here
  [1] ^

Total 2, Failed 2, Passed 0, Ignored 0
```
## Displaying a trace
If you have a test codebase with many nested helper functions, you may want to display a trace of source locations. You can do this with the `traced(here)` function.

In the following example, `expectOk` calls `expectOkStatus`. When the test fails, the source locations of both `expectOk` and `expectOkStatus` are displayed, along with the location of the call in the test.
```scala
import weaver._

object TracingSuite extends FunSuite {

  case class Response(body: String, status: Int)

  test("This test fails") {
    val response = Response("this went wrong", 500)
    expectOk(response) // The failure points here
  }

  def expectOk(response: Response)(implicit loc: SourceLocation): Expectations =
    expectOkStatus(response)
      .traced(here) // The failure points here too

  def expectOkStatus(response: Response)(implicit loc: SourceLocation): Expectations =
    expect.eql(response.status, 200)
      .traced(here) // The failure points here too
}
```
### Example report
```
repl.MdocSessionMdocAppTracingSuite
- This test fails 2ms

*************FAILURES*************
repl.MdocSessionMdocAppTracingSuite
- This test fails 2ms

  Values not equal: (troubleshooting_failures.md:192)
  (troubleshooting_failures.md:202)
  (troubleshooting_failures.md:197)

  => Diff (- obtained, + expected)
  -200
  +500

  troubleshooting_failures.md:192
  expectOk(response) // The failure points here
  ^
  troubleshooting_failures.md:202
  .traced(here) // The failure points here too
  ^
  troubleshooting_failures.md:197
  .traced(here) // The failure points here too
  ^

Total 1, Failed 1, Passed 0, Ignored 0
```