
fix types, integrated challenges into ui, created levels and tests

pull/1/head
Stephanie Gredell 7 months ago
parent
commit ec3d5b8bf0
  1. README.md (59 changed lines)
  2. cost.py (315 changed lines)
  3. data/levels.json (209 changed lines)
  4. internals/design/design.go (4 changed lines)
  5. internals/level/level.go (8 changed lines)
  6. internals/level/levels_test.go (36 changed lines)
  7. main.go (12 changed lines)
  8. static/game.html (22 changed lines)

README.md (59 changed lines)

@@ -1,59 +0,0 @@
## Get Started
This guide describes how to use DigitalOcean App Platform to run a sample Golang application.
**Note**: Following these steps may result in charges for the use of DigitalOcean services.
### Requirements
* You need a DigitalOcean account. If you do not already have one, first [sign up](https://cloud.digitalocean.com/registrations/new).
## Deploy the App
Click the following button to deploy the app to App Platform. If you are not currently logged in with your DigitalOcean account, this button prompts you to log in.
[![Deploy to DigitalOcean](https://www.deploytodo.com/do-btn-blue.svg)](https://cloud.digitalocean.com/apps/new?repo=https://github.com/digitalocean/sample-golang/tree/main)
Note that, for the purposes of this tutorial, this button deploys the app directly from DigitalOcean's GitHub repository, which disables automatic redeployment since you cannot change our template. If you want automatic redeployment or you want to change the sample app's code to your own, we instead recommend you fork [our repository](https://github.com/digitalocean/sample-golang/tree/main).
To fork our repository, click the **Fork** button in the top-right of [its page on GitHub](https://github.com/digitalocean/sample-golang/tree/main), then follow the on-screen instructions. To learn more about forking repos, see the [GitHub documentation](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo).
After forking the repo, you can view the same README in your own GitHub org; for example, in `https://github.com/<your-org>/sample-golang`. To deploy the new repo, visit the [control panel](https://cloud.digitalocean.com/apps) and click the **Create App** button. This takes you to the app creation page. Under **Service Provider**, select **GitHub**. Then, under **Repository**, select your newly-forked repo. Ensure that your branch is set to **main** and **Autodeploy** is checked on. Finally, click **Next**.
After clicking the **Deploy to DigitalOcean** button or completing the instructions above to fork the repo, follow these steps:
1. Configure the app, such as by specifying HTTP routes, declaring environment variables, or adding a database. For the purposes of this tutorial, this step is optional.
1. Provide a name for your app and select the region to deploy your app to, then click **Next**. By default, App Platform selects the region closest to you. Unless your app needs to interface with external services, your chosen region does not affect the app's performance, since all App Platform apps are routed through a global CDN.
1. On the following screen, leave all the fields as they are and click **Next**.
1. Confirm your plan settings and how many containers you want to launch and click **Launch Basic/Pro App**.
After, you should see a "Building..." progress indicator. You can click **View Logs** to see more details of the build. It can take a few minutes for the build to finish, but you can follow the progress in the **Deployments** tab.
Once the build completes successfully, click the **Live App** link in the header and you should see your running application in a new tab, displaying the home page.
## Make Changes to Your App
If you forked our repo, you can now make changes to your copy of the sample app. Pushing a new change to the forked repo automatically redeploys the app to App Platform with zero downtime.
Here's an example code change you can make for this app:
1. Edit `main.go` and replace the "Hello!" greeting on line 39 with a different greeting; see the sketch after this list.
1. Commit the change to the `main` branch. Normally it's a better practice to create a new branch for your change and then merge that branch to `main` after review, but for this demo you can commit to the `main` branch directly.
1. Visit the [control panel](https://cloud.digitalocean.com/apps) and navigate to your sample app.
1. You should see a "Building..." progress indicator, just like when you first created the app.
1. Once the build completes successfully, click the **Live App** link in the header and you should see your updated application running. You may need to force refresh the page in your browser (e.g. using **Shift** + **Reload**).
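For orientation, the greeting lives in a small `net/http` handler. The following is a minimal sketch of the kind of change step 1 describes; the sample app's actual handler and route may differ:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Swap this string for your own greeting, then commit and push to main.
		fmt.Fprintln(w, "Hello from my fork!")
	})
	http.ListenAndServe(":8080", nil)
}
```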
## Learn More
To learn more about App Platform and how to manage and update your application, see [our App Platform documentation](https://www.digitalocean.com/docs/app-platform/).
## Delete the App
When you no longer need this sample application running live, you can delete it by following these steps:
1. Visit the [Apps control panel](https://cloud.digitalocean.com/apps).
2. Navigate to the sample app.
3. In the **Settings** tab, click **Destroy**.
**Note**: If you do not delete your app, charges for using DigitalOcean services will continue to accrue.

cost.py (315 changed lines)

@@ -1,315 +0,0 @@
from typing import NamedTuple, Dict, Tuple
from enum import Enum


class LoadBalancerSpec(NamedTuple):
    capacity: float  # e.g. float('inf')
    baseLatency: int  # ms
    cost: int


class WebServerSmall(NamedTuple):
    capacity: int
    baseLatency: int
    penaltyPerRPS: float
    cost: int


class WebServerMedium(NamedTuple):
    capacity: int
    baseLatency: int
    penaltyPerRPS: float
    cost: int


class CacheStandard(NamedTuple):
    capacity: int
    baseLatency: int
    penaltyPer10RPS: float
    hitRates: Dict[str, float]
    cost: int


class CacheLarge(NamedTuple):
    capacity: int
    baseLatency: int
    penaltyPer10RPS: float
    hitRates: Dict[str, float]
    cost: int


class DbReadReplica(NamedTuple):
    readCapacity: int  # RPS
    baseReadLatency: int  # ms
    penaltyPer10RPS: float
    cost: int


class ComponentSpec(NamedTuple):
    loadBalancer: LoadBalancerSpec
    webServerSmall: WebServerSmall
    webServerMedium: WebServerMedium
    cacheStandard: CacheStandard
    cacheLarge: CacheLarge
    dbReadReplica: DbReadReplica


class Design(NamedTuple):
    numWebServerSmall: int
    numWebServerMedium: int
    cacheType: str  # Either "cacheStandard" or "cacheLarge"
    cacheTTL: str
    numDbReplicas: int
    promotionDelaySeconds: int


class Level(NamedTuple):
    id: int
    description: str
    targetRPS: int
    maxP95Latency: int
    maxMonthlyCost: int
    requiredAvailability: int
    failureEvents: list
    componentSpec: ComponentSpec
    simulatedDurationSeconds: int


class CacheType(Enum):
    STANDARD = "cacheStandard"
    LARGE = "cacheLarge"


class LevelSimulator:
    def __init__(self, level: Level, design: Design):
        self.level = level
        self.design = design
        self.specs = self.level.componentSpec

    def compute_cost(self) -> int:
        s = self.specs
        d = self.design
        cost_lb = s.loadBalancer.cost
        cost_ws_small = d.numWebServerSmall * s.webServerSmall.cost
        cost_ws_medium = d.numWebServerMedium * s.webServerMedium.cost
        if d.cacheType == CacheType.STANDARD.value:
            cost_cache = s.cacheStandard.cost
        else:
            cost_cache = s.cacheLarge.cost
        # "1" here stands for the master; add d.numDbReplicas for replicas
        cost_db = s.dbReadReplica.cost * (1 + d.numDbReplicas)
        return cost_lb + cost_ws_small + cost_ws_medium + cost_cache + cost_db

    def compute_rps(self) -> Tuple[float, float]:
        """
        Returns (hits_rps, misses_rps) for a read workload of size level.targetRPS.
        """
        s = self.specs
        d = self.design
        total_rps = self.level.targetRPS
        if d.cacheType == CacheType.STANDARD.value:
            hit_rate = s.cacheStandard.hitRates[d.cacheTTL]
        else:
            hit_rate = s.cacheLarge.hitRates[d.cacheTTL]
        hits_rps = total_rps * hit_rate
        misses_rps = total_rps * (1 - hit_rate)
        return hits_rps, misses_rps

    def compute_latencies(self) -> Dict[str, float]:
        """
        Computes:
          - L95_ws (worst P95 among small/medium, given misses_rps)
          - L95_cache (baseLatency)
          - L95_db_read (based on misses_rps and replicas)
          - L95_total_read = miss_path (since misses are slower)
        """
        s = self.specs
        d = self.design
        # 1) First compute hits/misses
        _, misses_rps = self.compute_rps()
        # 2) Web server P95
        cap_small = s.webServerSmall.capacity
        cap_medium = s.webServerMedium.capacity
        weighted_count = d.numWebServerSmall + (2 * d.numWebServerMedium)
        if weighted_count == 0:
            L95_ws = float("inf")
        else:
            load_per_weighted = misses_rps / weighted_count
            L95_ws_small = 0.0
            if d.numWebServerSmall > 0:
                if load_per_weighted <= cap_small:
                    L95_ws_small = s.webServerSmall.baseLatency
                else:
                    L95_ws_small = (
                        s.webServerSmall.baseLatency
                        + s.webServerSmall.penaltyPerRPS
                        * (load_per_weighted - cap_small)
                    )
            L95_ws_medium = 0.0
            # <<== FIXED: change "> 00" to "> 0"
            if d.numWebServerMedium > 0:
                if load_per_weighted <= cap_medium:
                    L95_ws_medium = s.webServerMedium.baseLatency
                else:
                    L95_ws_medium = (
                        s.webServerMedium.baseLatency
                        + s.webServerMedium.penaltyPerRPS
                        * (load_per_weighted - cap_medium)
                    )
            L95_ws = max(L95_ws_small, L95_ws_medium)
        # 3) Cache P95
        if d.cacheType == CacheType.STANDARD.value:
            L95_cache = s.cacheStandard.baseLatency
        else:
            L95_cache = s.cacheLarge.baseLatency
        # 4) DB read P95
        read_cap = s.dbReadReplica.readCapacity
        base_read_lat = s.dbReadReplica.baseReadLatency
        pen_per10 = s.dbReadReplica.penaltyPer10RPS
        num_reps = d.numDbReplicas
        if num_reps == 0:
            if misses_rps <= read_cap:
                L95_db_read = base_read_lat
            else:
                excess = misses_rps - read_cap
                L95_db_read = base_read_lat + pen_per10 * (excess / 10.0)
        else:
            load_per_rep = misses_rps / num_reps
            if load_per_rep <= read_cap:
                L95_db_read = base_read_lat
            else:
                excess = load_per_rep - read_cap
                L95_db_read = base_read_lat + pen_per10 * (excess / 10.0)
        # 5) End-to-end P95 read = miss_path
        L_lb = s.loadBalancer.baseLatency
        miss_path = L_lb + L95_ws + L95_db_read
        L95_total_read = miss_path
        return {
            "L95_ws": L95_ws,
            "L95_cache": L95_cache,
            "L95_db_read": L95_db_read,
            "L95_total_read": L95_total_read,
        }

    def compute_availability(self) -> float:
        """
        If failureEvents=[], just return 100.0.
        Otherwise:
          - For each failure (e.g. DB master crash at t_crash):
            if numDbReplicas==0: downtime = sim_duration - t_crash
            else if design has auto_failover: downtime = failover_delay
            else: downtime = sim_duration - t_crash
          - availability = (sim_duration - total_downtime) / sim_duration * 100
        """
        sim_duration = self.level.simulatedDurationSeconds  # you'd need this field
        total_downtime = 0
        for event in self.level.failureEvents:
            t_crash = event["time"]
            if event["type"] == "DB_MASTER_CRASH":
                if self.design.numDbReplicas == 0:
                    total_downtime += (sim_duration - t_crash)
                else:
                    # assume a fixed promotion delay (e.g. 5s)
                    delay = self.design.promotionDelaySeconds
                    total_downtime += delay
            # (handle other event types if needed)
        return (sim_duration - total_downtime) / sim_duration * 100

    def validate(self) -> dict:
        """
        1) Cost check
        2) Throughput checks (cache, DB, WS)
        3) Latency check
        4) Availability check (if there are failureEvents)
        Return { "pass": True, "metrics": {...} } or { "pass": False, "reason": "..." }.
        """
        total_cost = self.compute_cost()
        if total_cost > self.level.maxMonthlyCost:
            return { "pass": False, "reason": f"Budget ${total_cost} > ${self.level.maxMonthlyCost}" }
        hits_rps, misses_rps = self.compute_rps()
        # Cache capacity
        cache_cap = (
            self.specs.cacheStandard.capacity
            if self.design.cacheType == CacheType.STANDARD.value
            else self.specs.cacheLarge.capacity
        )
        if hits_rps > cache_cap:
            return { "pass": False, "reason": f"Cache overloaded ({hits_rps:.1f} RPS > {cache_cap})" }
        # DB capacity
        db_cap = self.specs.dbReadReplica.readCapacity
        if self.design.numDbReplicas == 0:
            if misses_rps > db_cap:
                return { "pass": False, "reason": f"DB overloaded ({misses_rps:.1f} RPS > {db_cap})" }
        else:
            per_rep = misses_rps / self.design.numDbReplicas
            if per_rep > db_cap:
                return {
                    "pass": False,
                    "reason": f"DB replicas overloaded ({per_rep:.1f} RPS/replica > {db_cap})"
                }
        # WS capacity
        total_ws_cap = (
            self.design.numWebServerSmall * self.specs.webServerSmall.capacity
            + self.design.numWebServerMedium * self.specs.webServerMedium.capacity
        )
        if misses_rps > total_ws_cap:
            return {
                "pass": False,
                "reason": f"Web servers overloaded ({misses_rps:.1f} RPS > {total_ws_cap})"
            }
        # Latency
        lat = self.compute_latencies()
        if lat["L95_total_read"] > self.level.maxP95Latency:
            return {
                "pass": False,
                "reason": f"P95 too high ({lat['L95_total_read']:.1f} ms > {self.level.maxP95Latency} ms)"
            }
        # Availability (only if failureEvents is nonempty)
        availability = 100.0
        if self.level.failureEvents:
            availability = self.compute_availability()
            if availability < self.level.requiredAvailability:
                return {
                    "pass": False,
                    "reason": f"Availability too low ({availability:.1f}% < "
                              f"{self.level.requiredAvailability}%)"
                }
        # If we reach here, all checks passed
        return {
            "pass": True,
            "metrics": {
                "cost": total_cost,
                "p95": lat["L95_total_read"],
                "achievedRPS": self.level.targetRPS,
                "availability": (
                    100.0 if not self.level.failureEvents else availability
                )
            }
        }
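The file above is the Python prototype of the level simulator that this commit deletes. Its cost model carries over directly to Go; a rough sketch of the same monthly-cost calculation, with illustrative type names and made-up prices rather than anything in this repository:

```go
package main

import "fmt"

// Illustrative port of LevelSimulator.compute_cost from cost.py; the type and
// field names here are invented for the sketch, not taken from the repo.
type ComponentCosts struct {
	LoadBalancer    int
	WebServerSmall  int
	WebServerMedium int
	CacheStandard   int
	CacheLarge      int
	DBReadReplica   int
}

type DesignChoice struct {
	NumWebServerSmall  int
	NumWebServerMedium int
	NumDBReplicas      int
	CacheType          string // "cacheStandard" or "cacheLarge"
}

// monthlyCost mirrors compute_cost: one load balancer, per-instance web
// servers, one cache, and one DB master plus read replicas.
func monthlyCost(c ComponentCosts, d DesignChoice) int {
	cache := c.CacheStandard
	if d.CacheType == "cacheLarge" {
		cache = c.CacheLarge
	}
	return c.LoadBalancer +
		d.NumWebServerSmall*c.WebServerSmall +
		d.NumWebServerMedium*c.WebServerMedium +
		cache +
		c.DBReadReplica*(1+d.NumDBReplicas)
}

func main() {
	costs := ComponentCosts{LoadBalancer: 20, WebServerSmall: 10, WebServerMedium: 25, CacheStandard: 15, CacheLarge: 40, DBReadReplica: 30}
	design := DesignChoice{NumWebServerSmall: 2, NumWebServerMedium: 1, NumDBReplicas: 1, CacheType: "cacheStandard"}
	// 20 + 2*10 + 1*25 + 15 + 30*(1+1) = 140
	fmt.Println(monthlyCost(costs, design))
}
```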

data/levels.json (209 changed lines)

@@ -0,0 +1,209 @@
[
{
"id": "url-shortener-easy",
"name": "URL Shortener",
"description": "Build a basic service to shorten URLs with a single backend.",
"difficulty": "easy",
"targetRps": 100,
"durationSec": 60,
"maxMonthlyUsd": 100,
"maxP95LatencyMs": 200,
"requiredAvailabilityPct": 99.0,
"mustInclude": ["database"],
"hints": ["Start with a basic backend and persistent storage."]
},
{
"id": "url-shortener-medium",
"name": "URL Shortener",
"description": "Scale your URL shortener to handle traffic spikes and ensure high availability.",
"difficulty": "medium",
"targetRps": 1000,
"durationSec": 180,
"maxMonthlyUsd": 300,
"maxP95LatencyMs": 150,
"requiredAvailabilityPct": 99.9,
"mustInclude": ["database", "loadBalancer"],
"encouragedComponents": ["cache"],
"hints": ["Consider caching and horizontal scaling."]
},
{
"id": "url-shortener-hard",
"name": "URL Shortener",
"description": "Design a globally distributed URL shortening service with low latency and high availability.",
"difficulty": "hard",
"targetRps": 10000,
"durationSec": 300,
"maxMonthlyUsd": 1000,
"maxP95LatencyMs": 100,
"requiredAvailabilityPct": 99.99,
"mustInclude": ["cdn", "database"],
"encouragedComponents": ["cache", "messageQueue"],
"hints": ["Think about write-path consistency and global replication."]
},
{
"id": "chat-app-easy",
"name": "Chat App",
"description": "Implement a simple chat app for small group communication.",
"difficulty": "easy",
"targetRps": 50,
"durationSec": 120,
"maxMonthlyUsd": 150,
"maxP95LatencyMs": 300,
"requiredAvailabilityPct": 99.0,
"mustInclude": ["webserver", "database"],
"hints": ["You don’t need to persist every message yet."]
},
{
"id": "chat-app-medium",
"name": "Chat App",
"description": "Support real-time chat across mobile and web, with message persistence.",
"difficulty": "medium",
"targetRps": 500,
"durationSec": 300,
"maxMonthlyUsd": 500,
"maxP95LatencyMs": 200,
"requiredAvailabilityPct": 99.9,
"mustInclude": ["webserver", "database", "messageQueue"],
"encouragedComponents": ["cache"],
"hints": ["Ensure you decouple frontend from persistence."]
},
{
"id": "chat-app-hard",
"name": "Chat App",
"description": "Design a Slack-scale chat platform supporting typing indicators, read receipts, and delivery guarantees.",
"difficulty": "hard",
"targetRps": 5000,
"durationSec": 600,
"maxMonthlyUsd": 1500,
"maxP95LatencyMs": 100,
"requiredAvailabilityPct": 99.99,
"mustInclude": ["messageQueue", "database"],
"discouragedComponents": ["single-instance webserver"],
"hints": ["Think about pub/sub, retries, and ordering guarantees."]
},
{
"id": "netflix-easy",
"name": "Netflix Clone",
"description": "Build a basic video streaming service with direct file access.",
"difficulty": "easy",
"targetRps": 200,
"durationSec": 300,
"maxMonthlyUsd": 500,
"maxP95LatencyMs": 500,
"requiredAvailabilityPct": 99.0,
"mustInclude": ["cdn"],
"hints": ["You don’t need full-blown adaptive streaming yet."]
},
{
"id": "netflix-medium",
"name": "Netflix Clone",
"description": "Add video transcoding, caching, and recommendations.",
"difficulty": "medium",
"targetRps": 1000,
"durationSec": 600,
"maxMonthlyUsd": 2000,
"maxP95LatencyMs": 300,
"requiredAvailabilityPct": 99.9,
"mustInclude": ["cdn", "data pipeline", "cache"],
"encouragedComponents": ["monitoring/alerting"],
"hints": ["Think about asynchronous jobs and caching strategy."]
},
{
"id": "netflix-hard",
"name": "Netflix Clone",
"description": "Design a globally resilient, multi-region Netflix-scale system with intelligent failover and real-time telemetry.",
"difficulty": "hard",
"targetRps": 10000,
"durationSec": 900,
"maxMonthlyUsd": 10000,
"maxP95LatencyMs": 200,
"requiredAvailabilityPct": 99.999,
"mustInclude": ["cdn", "data pipeline", "monitoring/alerting"],
"encouragedComponents": ["messageQueue", "cache", "third party service"],
"hints": ["You’ll need intelligent routing and fallback mechanisms."]
},
{
"id": "rate-limiter-easy",
"name": "Rate Limiter",
"description": "Build a basic in-memory rate limiter for a single instance service.",
"difficulty": "easy",
"targetRps": 200,
"durationSec": 60,
"maxMonthlyUsd": 50,
"maxP95LatencyMs": 100,
"requiredAvailabilityPct": 99.0,
"mustInclude": ["webserver"],
"hints": ["Use an in-memory store and sliding window or token bucket."]
},
{
"id": "rate-limiter-medium",
"name": "Rate Limiter",
"description": "Design a rate limiter that works across multiple instances and enforces global quotas.",
"difficulty": "medium",
"targetRps": 1000,
"durationSec": 180,
"maxMonthlyUsd": 300,
"maxP95LatencyMs": 50,
"requiredAvailabilityPct": 99.9,
"mustInclude": ["webserver", "cache"],
"encouragedComponents": ["messageQueue"],
"hints": ["Consider Redis or distributed token buckets. Account for clock drift."]
},
{
"id": "rate-limiter-hard",
"name": "Rate Limiter",
"description": "Build a globally distributed rate limiter with per-user and per-region policies.",
"difficulty": "hard",
"targetRps": 5000,
"durationSec": 300,
"maxMonthlyUsd": 1000,
"maxP95LatencyMs": 30,
"requiredAvailabilityPct": 99.99,
"mustInclude": ["cache"],
"encouragedComponents": ["cdn", "data pipeline", "monitoring/alerting"],
"hints": ["Ensure low latency despite distributed state. Avoid single points of failure."]
},
{
"id": "metrics-system-easy",
"name": "Metrics System",
"description": "Create a basic system that collects and stores custom app metrics locally.",
"difficulty": "easy",
"targetRps": 100,
"durationSec": 120,
"maxMonthlyUsd": 100,
"maxP95LatencyMs": 200,
"requiredAvailabilityPct": 99.0,
"mustInclude": ["webserver", "database"],
"hints": ["Start by storing metrics as timestamped values in a simple DB."]
},
{
"id": "metrics-system-medium",
"name": "Metrics System",
"description": "Design a pull-based metrics system like Prometheus that scrapes multiple services.",
"difficulty": "medium",
"targetRps": 1000,
"durationSec": 300,
"maxMonthlyUsd": 500,
"maxP95LatencyMs": 100,
"requiredAvailabilityPct": 99.9,
"mustInclude": ["data pipeline", "monitoring/alerting"],
"encouragedComponents": ["cache"],
"hints": ["Consider time-series indexing and label-based queries."]
},
{
"id": "metrics-system-hard",
"name": "Metrics System",
"description": "Build a scalable, multi-tenant metrics platform with real-time alerts and dashboard support.",
"difficulty": "hard",
"targetRps": 5000,
"durationSec": 600,
"maxMonthlyUsd": 1500,
"maxP95LatencyMs": 50,
"requiredAvailabilityPct": 99.99,
"mustInclude": ["monitoring/alerting", "data pipeline"],
"encouragedComponents": ["messageQueue", "cache", "third party service"],
"hints": ["Think about downsampling, alert thresholds, and dashboard queries."]
}
]
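The keys above have to line up with the JSON tags on the `Level` struct in `internals/level/level.go`, of which this diff shows only a fragment. A plausible shape, inferred from the level files and the test below; the real struct may differ:

```go
// Inferred from data/levels.json; the actual Level struct is not shown in this commit.
type Level struct {
	ID                      string   `json:"id"`
	Name                    string   `json:"name"`
	Description             string   `json:"description"`
	Difficulty              string   `json:"difficulty"`
	TargetRPS               int      `json:"targetRps"`
	DurationSec             int      `json:"durationSec"`
	MaxMonthlyUSD           int      `json:"maxMonthlyUsd"`
	MaxP95LatencyMs         int      `json:"maxP95LatencyMs"`
	RequiredAvailabilityPct float64  `json:"requiredAvailabilityPct"`
	MustInclude             []string `json:"mustInclude"`
	EncouragedComponents    []string `json:"encouragedComponents,omitempty"`
	DiscouragedComponents   []string `json:"discouragedComponents,omitempty"`
	Hints                   []string `json:"hints"`
}
```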

internals/design/design.go (4 changed lines)

@@ -5,7 +5,7 @@ import "encoding/json"
 type Node struct {
 	ID       string                 `json:"id"`
 	Type     string                 `json:"type"`
-	Position Position               `josn:"position"`
+	Position Position               `json:"position"`
 	Props    map[string]interface{} `json:"props"`
 }
@@ -20,7 +20,7 @@ type Connection struct {
 	Label     string `json:"label,omitempty"`
 	Direction string `json:"direction,omitempty"`
 	Protocol  string `json:"protocol,omitempty"`
-	TLS       bool   `json:"tls,omitemity"`
+	TLS       bool   `json:"tls,omitempty"`
 	Capacity  int    `json:"capacity,omitempty"`
 }
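These two one-character fixes matter because `encoding/json` silently ignores tags it cannot match: with `josn:"position"` the field is treated as untagged and marshals under its Go name `Position`, and the unknown option in `tls,omitemity` means `omitempty` never applies. A small self-contained demo of the difference (not part of the repo):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type before struct {
	Position string `josn:"position"`      // misspelled tag key: treated as untagged
	TLS      bool   `json:"tls,omitemity"` // unknown option: omitempty never applies
}

type after struct {
	Position string `json:"position"`
	TLS      bool   `json:"tls,omitempty"`
}

func main() {
	a, _ := json.Marshal(before{Position: "10,20"})
	b, _ := json.Marshal(after{Position: "10,20"})
	fmt.Println(string(a)) // {"Position":"10,20","tls":false}
	fmt.Println(string(b)) // {"position":"10,20"}
}
```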

internals/level/level.go (8 changed lines)

@@ -49,16 +49,14 @@ type FailureEvent struct {
 }

 func LoadLevels(path string) ([]Level, error) {
-	file, err := os.Open(path)
+	data, err := os.ReadFile(path)
 	if err != nil {
 		return nil, fmt.Errorf("Error opening levels.json: %w", err)
 	}
-	defer file.Close()
 	var levels []Level
-	err = json.NewDecoder(file).Decode(&levels)
-	if err != nil {
-		return nil, fmt.Errorf("Error decoding levels.json: %w", err)
+	if err := json.Unmarshal(data, &levels); err != nil {
+		return nil, fmt.Errorf("Error parsing levels.json: %w", err)
 	}
 	return levels, nil
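Swapping the streaming decoder for `os.ReadFile` + `json.Unmarshal` is equivalent here, since levels.json is small and read in one shot. If stricter validation of the level files is ever wanted, a decoder variant can reject misspelled keys; a sketch of that option, not what this commit does:

```go
package level

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
)

// loadLevelsStrict is a sketch of a stricter loader: unlike the committed
// LoadLevels, it fails on keys in levels.json that don't map to a Level field.
func loadLevelsStrict(path string) ([]Level, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("error opening %s: %w", path, err)
	}
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	var levels []Level
	if err := dec.Decode(&levels); err != nil {
		return nil, fmt.Errorf("error parsing %s: %w", path, err)
	}
	return levels, nil
}
```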

internals/level/levels_test.go (36 changed lines)

@@ -0,0 +1,36 @@
package level

import (
	"fmt"
	"os"
	"path/filepath"
	"testing"
)

func TestLoadLevels(t *testing.T) {
	path := filepath.Join("..", "..", "data", "levels.json")
	cwd, _ := os.Getwd()
	fmt.Println("Current working directory: ", cwd)
	fmt.Println("loading path: ", path)

	levels, err := LoadLevels(path)
	if err != nil {
		t.Fatalf("failed to load levels.json: %v", err)
	}
	if len(levels) == 0 {
		t.Fatalf("expected at least one level, got 0")
	}

	InitRegistry(levels)
	lvl, err := GetLevel("Metrics System", DifficultyHard)
	if err != nil {
		t.Fatalf("expected to retrieve Metrics System (hard), got %v", err)
	}
	if lvl.Difficulty != DifficultyHard {
		t.Errorf("unexpected difficulty: got %s, want %s", lvl.Difficulty, DifficultyHard)
	}
}
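The test exercises `InitRegistry` and `GetLevel`, which are not part of this diff. One way they could be implemented is a map keyed by name and difficulty; this is a guess at their shape, not the repository's code:

```go
package level

// Hypothetical registry backing the calls in levels_test.go; the real
// InitRegistry/GetLevel are not shown in this commit.

import "fmt"

const (
	DifficultyEasy   = "easy"
	DifficultyMedium = "medium"
	DifficultyHard   = "hard"
)

var registry map[string]Level

// InitRegistry indexes levels by "<name>/<difficulty>".
func InitRegistry(levels []Level) {
	registry = make(map[string]Level, len(levels))
	for _, l := range levels {
		registry[l.Name+"/"+l.Difficulty] = l
	}
}

// GetLevel looks up a level by its display name and difficulty.
func GetLevel(name, difficulty string) (Level, error) {
	l, ok := registry[name+"/"+difficulty]
	if !ok {
		return Level{}, fmt.Errorf("no level %q with difficulty %q", name, difficulty)
	}
	return l, nil
}
```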

main.go (12 changed lines)

@@ -6,6 +6,7 @@ import (
 	"net/http"
 	"os"
 	"os/signal"
+	"systemdesigngame/internals/level"
 	"time"
 )
@@ -48,11 +49,16 @@ func index(w http.ResponseWriter, r *http.Request) {
 }

 func game(w http.ResponseWriter, r *http.Request) {
+	var err error
+	levels, err := level.LoadLevels("data/levels.json")
+	if err != nil {
+		panic("failed to load levels: " + err.Error())
+	}
 	data := struct {
-		Title string
+		Levels []level.Level
 	}{
-		Title: "Title",
+		Levels: levels,
 	}
 	tmpl.ExecuteTemplate(w, "game.html", data)
 }
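As committed, `game` re-reads and re-parses `data/levels.json` on every request and panics inside the handler if the file is bad. Since the level data is static, one alternative is to load it once at startup and let the handler capture it. A fragment sketching that approach; it relies on the file's existing `tmpl` variable and standard `net/http`/`log` imports, and the route name is illustrative, so it is not what this commit does:

```go
// Sketch: load levels once in main() instead of inside the handler.
levels, err := level.LoadLevels("data/levels.json")
if err != nil {
	log.Fatalf("failed to load levels: %v", err)
}

// Reuse whatever pattern main.go already registers for the game page.
http.HandleFunc("/game", func(w http.ResponseWriter, r *http.Request) {
	data := struct {
		Levels []level.Level
	}{Levels: levels}
	if err := tmpl.ExecuteTemplate(w, "game.html", data); err != nil {
		log.Printf("render game.html: %v", err)
	}
})
```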

static/game.html (22 changed lines)

@@ -549,22 +549,12 @@
 	<h2 class="sidebar-title">Challenges</h2>
 	<ul class="challenge-list">
-		<li class="challenge-item active">
-			<div class="challenge-name">Url Shortener</div>
-			<div class="challenge-difficulty easy">Easy</div>
-		</li>
-		<li class="challenge-item">
-			<div class="challenge-name">Url Shortener</div>
-			<div class="challenge-difficulty easy">Easy</div>
-		</li>
-		<li class="challenge-item">
-			<div class="challenge-name">Url Shortener</div>
-			<div class="challenge-difficulty medium">Medium</div>
-		</li>
-		<li class="challenge-item">
-			<div class="challenge-name">Something hard</div>
-			<div class="challenge-difficulty hard">Hard</div>
-		</li>
+		{{range .Levels}}
+		<li class="challenge-item">
+			<div class="challenge-name">{{.Name}}</div>
+			<div class="challenge-difficulty {{.Difficulty}}">{{.Difficulty}}</div>
+		</li>
+		{{end}}
 	</ul>
 </div>
 <div id="canvas-wrapper">
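One behavioral detail of the new template: `{{.Difficulty}}` now feeds both the CSS class (which the stylesheet expects in lowercase) and the visible label, which the hard-coded markup rendered as "Easy"/"Medium"/"Hard". If Title-case labels are wanted, a template helper is one option; a sketch assuming the templates are parsed from `static/` with `html/template` and that `strings` is imported, not part of this commit:

```go
// Sketch: register a "title" helper, then write {{.Difficulty | title}} for the
// label while keeping plain {{.Difficulty}} for the CSS class.
tmpl := template.Must(template.New("").Funcs(template.FuncMap{
	"title": func(s string) string {
		if s == "" {
			return s
		}
		return strings.ToUpper(s[:1]) + s[1:]
	},
}).ParseGlob("static/*.html"))
```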
