func TestHotWriteRegionScheduleByteRateOnlyWithTiFlash
has a cyclomatic complexity of 21 with "high" risk

    clearPendingInfluence(hb.(*hotScheduler))
}

func TestHotWriteRegionScheduleByteRateOnlyWithTiFlash(t *testing.T) {
    re := require.New(t)
    cancel, _, tc, oc := prepareSchedulersTest()
    defer cancel()
func checkHotWriteRegionScheduleByteRateOnly
has a cyclomatic complexity of 18 with "high" risk

    operatorutil.CheckTransferPeerWithLeaderTransfer(re, ops[0], operator.OpHotRegion, 1, 2)
}

func checkHotWriteRegionScheduleByteRateOnly(re *require.Assertions, enablePlacementRules bool) {
    cancel, opt, tc, oc := prepareSchedulersTest()
    defer cancel()
    tc.SetClusterVersion(versioninfo.MinSupportedVersion(versioninfo.ConfChangeV2))
func betterThanV1
has a cyclomatic complexity of 18 with "high" risk

}

// betterThan checks if `bs.cur` is a better solution than `old`.
func (bs *balanceSolver) betterThanV1(old *solution) bool {
    if old == nil || bs.cur.progressiveRank <= splitProgressiveRank {
        return true
    }
func filterDstStores
has a cyclomatic complexity of 17 with "high" risk

}

// filterDstStores select the candidate store by filters
func (bs *balanceSolver) filterDstStores() map[uint64]*statistics.StoreLoadDetail {
    var (
        filters    []filter.Filter
        candidates []*statistics.StoreLoadDetail
func solve
has a cyclomatic complexity of 21 with "high" risk

// solve travels all the src stores, hot peers, dst stores and select each one of them to make a best scheduling solution.
// The comparing between solutions is based on calcProgressiveRank.
func (bs *balanceSolver) solve() []*operator.Operator {
    if !bs.isValid() {
        return nil
    }
A function with high cyclomatic complexity can be hard to understand and maintain. Cyclomatic complexity is a software metric that measures the number of independent paths through a function. A higher cyclomatic complexity indicates that the function has more decision points and is more complex.
Functions with high cyclomatic complexity are more likely to have bugs and be harder to test. They may lead to reduced code maintainability and increased development time.
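As a minimal illustration of how the counting works (abs and sign are illustrative names, not functions from the report above): the base complexity of a function is 1, and each decision point (if, else-if, for, case, &&, ||) adds one.

```go
package main

// abs has one decision point, so its cyclomatic
// complexity is 1 (base path) + 1 (if) = 2.
func abs(v int) int {
    if v < 0 { // cc = 2
        return -v
    }
    return v
}

// sign has two decision points, so its cyclomatic
// complexity is 1 + 2 = 3.
func sign(v int) int {
    if v > 0 { // cc = 2
        return 1
    } else if v < 0 { // cc = 3
        return -1
    }
    return 0
}
```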
To reduce the cyclomatic complexity of a function, you can:
- extract independent blocks of logic into smaller helper functions;
- use guard clauses and early returns to flatten nested conditionals;
- merge related case branches and simplify boolean expressions by naming intermediate results.
package main

import "log"

func fizzbuzzfuzz(x int) { // cc = 1
    if x == 0 || x < 0 { // cc = 3 (if, ||)
        return
    }
    countDiv3, countDiv5 := 0, 0
    for i := 1; i <= x; i++ { // cc = 4 (for)
        switch i % 15 {
        case 0: // cc = 5 (case)
            countDiv3++
            countDiv5++
            log.Println("fizzbuzz")
        case 3: // cc = 6 (case)
            fallthrough
        case 6: // cc = 7 (case)
            fallthrough
        case 9: // cc = 8 (case)
            fallthrough
        case 12: // cc = 9 (case)
            countDiv3++
            log.Println("fizz")
        case 5: // cc = 10 (case)
            fallthrough
        case 10: // cc = 11 (case)
            countDiv5++
            log.Println("buzz")
        default:
            log.Printf("%d\n", i)
        }
    }
    log.Printf("div3=%d div5=%d\n", countDiv3, countDiv5)
} // CC == 11; raises issues
package main

import "log"

func fizzbuzz(x int) { // cc = 1
    for i := 1; i <= x; i++ { // cc = 2 (for)
        y := i%3 == 0
        z := i%5 == 0
        if y == z { // cc = 3 (if)
            if !y { // cc = 4 (if)
                log.Printf("%d\n", i)
            } else {
                log.Println("fizzbuzz")
            }
        } else {
            if y { // cc = 5 (if)
                log.Println("fizz")
            } else {
                log.Println("buzz")
            }
        }
    }
} // CC == 5
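The complexity can be pushed down further by extracting the word-selection logic into a helper, so no single function accumulates all the decision points. This is a sketch, with fizzbuzzWord and fizzbuzzSplit as hypothetical names:

```go
package main

import "log"

// fizzbuzzWord returns the word for i, or "" when i should
// be printed as a plain number.
func fizzbuzzWord(i int) string { // cc = 1
    switch {
    case i%15 == 0: // cc = 2 (case)
        return "fizzbuzz"
    case i%3 == 0: // cc = 3 (case)
        return "fizz"
    case i%5 == 0: // cc = 4 (case)
        return "buzz"
    }
    return ""
} // CC == 4

func fizzbuzzSplit(x int) { // cc = 1
    for i := 1; i <= x; i++ { // cc = 2 (for)
        if w := fizzbuzzWord(i); w != "" { // cc = 3 (if)
            log.Println(w)
        } else {
            log.Printf("%d\n", i)
        }
    }
} // CC == 3
```

Each function now stays well below the threshold, and the word-selection rule can be tested in isolation.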
The cyclomatic complexity threshold can be configured using the cyclomatic_complexity_threshold setting in the .deepsource.toml config file. Configuring this is optional. If you don't provide a value, the Analyzer will raise issues for functions with complexity higher than the default threshold, which is medium (only raise issues for >15) for the Go Analyzer.
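A minimal sketch of such a config, assuming the standard .deepsource.toml layout where analyzer options live under the analyzer's meta section (verify the exact key placement against your setup):

```toml
version = 1

[[analyzers]]
name = "go"

  [analyzers.meta]
  # Assumed: raise issues only for functions above the "high" band.
  cyclomatic_complexity_threshold = "high"
```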
Here's the mapping of risk categories to cyclomatic complexity scores to help you configure this better:

Risk category | Cyclomatic complexity range | Recommended action
---|---|---
low | 1-5 | No action needed.
medium | 6-15 | Review and monitor.
high | 16-25 | Review and refactor. If the function must be kept as it is, add comments explaining why.
very-high | 26-50 | Refactor to reduce the complexity.
critical | >50 | Must refactor. Code at this complexity can be untestable and very difficult to understand.