Compare commits

...

62 Commits

Author SHA1 Message Date
JamesFlare1212
1340ff2a0e add slides for reading-9787562498056 2025-05-06 10:03:45 -04:00
JamesFlare1212
e2398a8a39 add reading-9787562498056 2025-05-05 19:11:30 -04:00
JamesFlare1212
6a996f1c09 fix LaTeX passthrough 2025-05-04 15:35:08 -04:00
JamesFlare1212
68ffcf4956 pre-release of cards-sue-hbd-20 2025-05-04 15:24:15 -04:00
JamesFlare1212
a38c21b1a9 add common-terms 2025-04-25 18:41:06 -04:00
JamesFlare1212
54bf5ca168 update to FixIt 0.3.18 2025-04-15 14:54:07 -04:00
JamesFlare1212
df2c11ae67 add csci-1200-hw-3 2025-02-20 19:13:06 -05:00
JamesFlare1212
c501581f39 update csci-1200-hw-1 2025-02-16 14:06:19 -05:00
JamesFlare1212
dc9bda2b37 add csci-1200-hw-1 2025-02-16 14:05:36 -05:00
JamesFlare1212
2f3f75d3f2 improve engr-2350-quiz-02 2025-02-14 00:20:10 -05:00
aebff3d595 add engr-2350-quiz-02 2025-02-13 12:55:48 -05:00
JamesFlare1212
83fb593dd6 update engr-2350-lab-01 2025-02-10 23:31:39 -05:00
JamesFlare1212
764bb967f6 improve seo for engr-2350-lab-01 2025-02-10 21:58:40 -05:00
d558d5834e add engr-2350-lab-01 2025-02-10 21:51:57 -05:00
JamesFlare1212
21e87884c6 update ollama-deepseek-r1-distill 2025-02-10 00:43:43 -05:00
JamesFlare1212
c968a3ae00 add ollama-deepseek-r1-distill 2025-02-09 03:56:34 -05:00
JamesFlare1212
6726a156b3 add csci-1200-hw-2 2025-01-31 13:20:17 -05:00
a10a010701 update theme 2025-01-23 13:19:11 -05:00
JamesFlare1212
f4af875bb3 create csci-1200-hw-2 2025-01-22 10:35:21 -05:00
JamesFlare1212
4b57659747 improve studio-0-linux-2016-2 2025-01-13 19:40:09 -05:00
JamesFlare1212
c7f48c6fe9 add studio-0-linux-2016-2 2025-01-09 22:31:59 -05:00
594aa545da improve wording and update changes in x5 rom 2024-12-29 22:37:43 -05:00
286a3d2f32 update theme and minor fixes 2024-12-29 21:07:13 -05:00
df94c4721b update dsas-cca-api 2024-12-21 06:27:54 -05:00
JamesFlare1212
2fa41ea756 add ecse-1010-poc-lab03 2024-12-18 02:06:38 -05:00
JamesFlare1212
4eb7cd640b add csci-1100-crib-sheets 2024-12-08 22:42:38 -05:00
JamesFlare1212
5f52c8b518 improve ecse-1010-poc-lab02 2024-11-28 15:18:17 -05:00
JamesFlare1212
0e9f89d331 add ecse-1010-poc-lab02 2024-11-28 15:13:08 -05:00
JamesFlare1212
44447ec5b8 update umami tracker 2024-11-28 12:41:10 -05:00
JamesFlare1212
df21d69df4 fix katex and table style 2024-11-20 15:29:21 -05:00
JamesFlare1212
47358d25a8 improve katex style 2024-11-19 16:54:06 -05:00
JamesFlare1212
fc70b1776c resize pdf in posts 2024-11-19 08:15:05 -05:00
JamesFlare1212
e1e1e305bc optimize pdf in posts 2024-11-19 08:01:10 -05:00
JamesFlare1212
323c90005b add ecse-1010-poc-lab01 2024-11-19 07:46:14 -05:00
JamesFlare1212
ccb5641919 update theme 2024-11-18 01:27:38 -05:00
JamesFlare1212
2a881236cc fix url error 2024-11-07 21:15:07 -05:00
JamesFlare1212
218dcad925 update theme 2024-11-07 21:15:07 -05:00
JamesFlare1212
02cce87a93 update hw-7 solutions 2024-11-07 21:15:07 -05:00
637bdeb6c0 flarum-queue 2024-10-25 13:31:28 -04:00
JamesFlare1212
7222828595 improve translation and csci-1100-hw-7 2024-09-21 10:59:36 -04:00
JamesFlare1212
7878b7623b add install-lobechat-db 2024-09-16 04:18:31 -04:00
JamesFlare1212
d3c9fe3a83 update theme and csci-1100 hw8 2024-09-14 03:43:46 -04:00
71e64a0329 update picea-power-x5-qa 2024-08-01 11:50:27 +08:00
9240ca2d83 remove x5 firmware changelog 2024-07-19 23:45:06 +08:00
cedd0a3830 update picea-power-x5-qa 2024-07-07 00:52:54 +08:00
54019e743b improve picea-power-x5-qa 2024-07-05 22:59:22 +08:00
a148c972e1 improve picea-power-x5-qa 2024-07-05 22:57:37 +08:00
521ce492c0 improve picea-power-x5-qa 2024-07-05 21:24:27 +08:00
34011a8589 improve picea-power-x5-qa 2024-07-05 00:00:10 +08:00
dcf2763424 picea-power-x5-qa 2024-07-04 22:57:21 +08:00
021d9935d4 dsas-cca-api 2024-06-30 23:43:49 +08:00
f1677360fc improve sing-box config and update friend info 2024-06-26 01:03:41 +08:00
574ceec1ff fix bad tun name 2024-05-26 23:15:26 +08:00
bf4a723a7a add sing-box macos config 2024-05-12 07:32:35 +08:00
5de976f4b3 improve sing-box ipv6 config 2024-05-12 06:53:48 +08:00
a24aec9914 update sing-box config 2024-05-07 14:37:02 +08:00
c86e21521b get-my-proxy 2024-05-02 21:55:01 -04:00
JamesFlare1212
2a1ed5983f update theme 2024-04-24 13:04:12 -04:00
JamesFlare1212
06085a2b19 new git card 2024-04-17 17:36:26 -04:00
JamesFlare1212
a100b6794e fix estimated read time 2024-04-17 14:29:44 -04:00
JamesFlare1212
3b991a8135 csci-1100-hw-6 2024-04-13 23:18:26 -04:00
JamesFlare1212
320e5c296b cc-attack-on-index-php 2024-04-13 21:20:17 -04:00
426 changed files with 29807 additions and 7126 deletions

6
.gitignore vendored
View File

@@ -1,4 +1,6 @@
*.lock
public/
resources/
disableFastRander/
resources/_gen/
disableFastRander/
.hugo_build.lock
*Zone.Identifier

6
.gitmodules vendored
View File

@@ -2,3 +2,9 @@
path = themes/FixIt
url = https://github.com/hugo-fixit/FixIt.git
branch = dev
[submodule "themes/component-projects"]
path = themes/component-projects
url = https://github.com/hugo-fixit/component-projects.git
[submodule "themes/hugo-embed-pdf-shortcode"]
path = themes/hugo-embed-pdf-shortcode
url = https://github.com/anvithks/hugo-embed-pdf-shortcode.git

View File

@@ -7,15 +7,15 @@ This is a git repository for the FlareBlog. The Blog is based on [Hugo](https://
Clone the repository:
```bash
git clone https://github.com/JamesFlare1212/FlareBlog.git
git clone --recurse-submodules https://github.com/JamesFlare1212/FlareBlog.git
```
Then, install Hugo.
For Linux:
```bash
wget https://github.com/gohugoio/hugo/releases/download/v0.122.0/hugo_extended_0.122.0_linux-amd64.deb
dpkg -i hugo_extended_0.122.0_linux-amd64.deb
wget https://github.com/gohugoio/hugo/releases/download/v0.146.0/hugo_extended_0.146.0_linux-amd64.deb
sudo dpkg -i hugo_extended_0.146.0_linux-amd64.deb
```
For macOS:

View File

@@ -2,6 +2,7 @@
title: {{ replace .TranslationBaseName "-" " " | title }}
subtitle:
date: {{ .Date }}
lastmod: {{ .Date }}
slug: {{ substr .File.UniqueID 0 7 }}
description:
keywords:

View File

@@ -2,13 +2,14 @@
title: {{ replace .TranslationBaseName "-" " " | title }}
subtitle:
date: {{ .Date }}
lastmod: {{ .Date }}
slug: {{ substr .File.UniqueID 0 7 }}
draft: true
author:
name:
link:
name: James
link: https://www.jamesflare.com
email:
avatar:
avatar: /site-logo.avif
description:
keywords:
license:
@@ -36,7 +37,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -24,4 +24,30 @@ details summary strong {
.aside-collection {
margin-top: 64px;
}
}
// Short code - columns
.md-columns {
display: flex;
flex-wrap: wrap;
margin-left: -1rem;
margin-right: -1rem;
>div {
flex: 1 1;
margin: 1rem 0;
min-width: 100px;
max-width: 100%;
padding: 0 1rem;
}
.markdown-inner {
margin-top: 0;
margin-bottom: 0;
}
}
.katex-display {
overflow-y: clip;
}

View File

@@ -0,0 +1,19 @@
.md-columns {
display: flex;
flex-wrap: wrap;
margin-left: -1rem;
margin-right: -1rem;
>div {
flex: 1 1;
margin: 1rem 0;
min-width: 100px;
max-width: 100%;
padding: 0 1rem;
}
.markdown-inner {
margin-top: 0;
margin-bottom: 0;
}
}

View File

@@ -1,21 +1,32 @@
# =====================================================================================
# It's recommended to use Alternate Theme Config to configure FixIt
# Modifying this file may result in merge conflict
# There are currently some restrictions to what a theme component can configure:
# params, menu, outputformats and mediatypes
# =====================================================================================
# -------------------------------------------------------------------------------------
# Hugo Configuration
# See: https://gohugo.io/getting-started/configuration/
# -------------------------------------------------------------------------------------
#ignoreLogs = ['error-get-gh-repo', 'error-get-remote-json']
# website title
title = "FlareBlog"
# Hostname (and path) to the root
baseURL = "https://www.jamesflare.com/"
# theme list
theme = "FixIt" # enable in your site config file
theme = ["FixIt", "component-projects", "hugo-embed-pdf-shortcode"]
enableInlineShortcodes = true
defaultContentLanguage = "en"
# language code ["en", "zh-CN", "fr", "pl", ...]
languageCode = "en"
# language name ["English", "简体中文", "Français", "Polski", ...]
languageName = "English"
# whether to include Chinese/Japanese/Korean
hasCJKLanguage = true
# default amount of posts in each pages
paginate = 12
[pagination]
pagerSize = 12
# copyright description used only for seo schema
copyright = ""
# whether to use robots.txt
@@ -25,9 +36,6 @@ enableGitInfo = false
# whether to use emoji code
enableEmoji = true
defaultContentLanguage = "en"
defaultContentLanguageInSubdir = true
# -------------------------------------------------------------------------------------
# Related content Configuration
# See: https://gohugo.io/content-management/related/
@@ -94,18 +102,46 @@ defaultContentLanguageInSubdir = true
########## necessary configurations ##########
guessSyntax = true
# Goldmark is from Hugo 0.60 the default library used for Markdown
# https://gohugo.io/getting-started/configuration-markup/#goldmark
[markup.goldmark]
duplicateResourceFiles = false
[markup.goldmark.extensions]
definitionList = true
footnote = true
linkify = true
strikethrough = true
linkifyProtocol = 'https'
strikethrough = false
table = true
taskList = true
typographer = true
[markup.goldmark.extensions.passthrough]
enable = true
[markup.goldmark.extensions.passthrough.delimiters]
block = [['\[', '\]'], ['$$', '$$']]
inline = [['\(', '\)'], ['$', '$']]
# https://gohugo.io/getting-started/configuration-markup/#extras
[markup.goldmark.extensions.extras]
[markup.goldmark.extensions.extras.delete]
enable = true
[markup.goldmark.extensions.extras.insert]
enable = true
[markup.goldmark.extensions.extras.mark]
enable = true
[markup.goldmark.extensions.extras.subscript]
enable = true
[markup.goldmark.extensions.extras.superscript]
enable = true
# TODO passthrough refactor https://gohugo.io/getting-started/configuration-markup/#parserattributeblock
# TODO hugo 0.122.0 https://gohugo.io/content-management/mathematics/
[markup.goldmark.parser]
[markup.goldmark.parser.attribute]
block = true
title = true
[markup.goldmark.renderer]
hardWraps = false
# whether to use HTML tags directly in the document
unsafe = true
xhtml = false
# Table Of Contents settings
[markup.tableOfContents]
ordered = false
@@ -137,7 +173,7 @@ defaultContentLanguageInSubdir = true
# -------------------------------------------------------------------------------------
[privacy]
[privacy.twitter]
[privacy.x]
enableDNT = true
[privacy.youtube]
privacyEnhanced = true
@@ -161,11 +197,6 @@ defaultContentLanguageInSubdir = true
# -------------------------------------------------------------------------------------
[outputFormats]
# Options to make output .md files
[outputFormats.MarkDown]
mediaType = "text/markdown"
isPlainText = true
isHTML = false
# FixIt 0.3.0 | NEW Options to make output /archives/index.html file
[outputFormats.archives]
path = "archives"
@@ -174,6 +205,7 @@ defaultContentLanguageInSubdir = true
isPlainText = false
isHTML = true
permalinkable = true
notAlternative = true
# FixIt 0.3.0 | NEW Options to make output /offline/index.html file
[outputFormats.offline]
path = "offline"
@@ -182,18 +214,29 @@ defaultContentLanguageInSubdir = true
isPlainText = false
isHTML = true
permalinkable = true
notAlternative = true
# FixIt 0.3.0 | NEW Options to make output readme.md file
[outputFormats.README]
[outputFormats.readme]
baseName = "readme"
mediaType = "text/markdown"
isPlainText = true
isHTML = false
notAlternative = true
# FixIt 0.3.0 | CHANGED Options to make output baidu_urls.txt file
[outputFormats.baidu_urls]
baseName = "baidu_urls"
mediaType = "text/plain"
isPlainText = true
isHTML = false
notAlternative = true
# FixIt 0.3.10 | NEW Options to make output search.json file
[outputFormats.search]
baseName = "search"
mediaType = "application/json"
rel = "search"
isPlainText = true
isHTML = false
permalinkable = true
# -------------------------------------------------------------------------------------
# Customizing Output Formats
@@ -207,11 +250,11 @@ defaultContentLanguageInSubdir = true
# taxonomy: ["HTML", "RSS"]
# term: ["HTML", "RSS"]
[outputs]
home = ["HTML", "RSS", "JSON", "archives"]
page = ["HTML", "MarkDown"]
section = ["HTML", "RSS"]
taxonomy = ["HTML"]
term = ["HTML", "RSS"]
home = ["html", "rss", "archives", "offline", "search"]
page = ["html", "markdown"]
section = ["html", "rss"]
taxonomy = ["html"]
term = ["html", "rss"]
# -------------------------------------------------------------------------------------
# Taxonomies Configuration
@@ -248,8 +291,8 @@ defaultContentLanguageInSubdir = true
enablePWA = false
# FixIt 0.2.14 | NEW whether to add external Icon for external links automatically
externalIcon = false
# FixIt 0.3.0 | NEW whether to reverse the order of the navigation menu
navigationReverse = false
# FixIt 0.3.13 | NEW whether to capitalize titles
capitalizeTitles = true
# FixIt 0.3.0 | NEW whether to add site title to the title of every page
# remember to set up your site title in `hugo.toml` (e.g. title = "title")
withSiteTitle = true
@@ -258,6 +301,8 @@ defaultContentLanguageInSubdir = true
# FixIt 0.3.0 | NEW whether to add site subtitle to the title of index page
# remember to set up your site subtitle by `params.header.subtitle.name`
indexWithSubtitle = false
# FixIt 0.3.13 | NEW whether to show summary in plain text
summaryPlainify = false
# FixIt 0.2.14 | NEW FixIt will, by default, inject a theme meta tag in the HTML head on the home page only.
# You can turn it off, but we would really appreciate if you don't, as this is a good way to watch FixIt's popularity on the rise.
disableThemeInject = false
@@ -360,19 +405,26 @@ defaultContentLanguageInSubdir = true
enable = false
sticky = false
showHome = false
# FixIt 0.3.13 | NEW
separator = "/"
capitalize = true
# FixIt 0.3.10 | NEW Post navigation config
[params.navigation]
# whether to show the post navigation in section pages scope
inSection = false
# whether to reverse the next/previous post navigation order
reverse = false
# Footer config
[params.footer]
enable = true
# FixIt 0.2.17 | CHANGED Custom content (HTML format is supported)
# For advanced use, see parameter `params.customFilePath.footer`
custom = ""
# whether to show copyright info
copyright = true
# whether to show the author
author = true
# Site creation year
since = "2022"
since = ""
# FixIt 0.2.12 | NEW Public network security only in China (HTML format is supported)
gov = ""
# ICP info only in China (HTML format is supported)
@@ -412,8 +464,12 @@ defaultContentLanguageInSubdir = true
paginate = 20
# date format (month and day)
dateFormat = "01-02"
# amount of RSS pages
rss = 30
# FixIt 0.3.10 | NEW Section feed config for RSS, Atom and JSON feed.
[params.section.feed]
# The number of posts to include in the feed. If set to -1, all posts.
limit = -1
# whether to show the full text content in feed.
fullText = false
# FixIt 0.2.13 | NEW recently updated pages config
# TODO refactor to support archives, section, taxonomy and term
[params.section.recentlyUpdated]
@@ -422,14 +478,26 @@ defaultContentLanguageInSubdir = true
days = 30
maxCount = 10
# List (category or tag) page config
# Term list (category or tag) page config
[params.list]
# special amount of posts in each list page
paginate = 20
# date format (month and day)
dateFormat = "01-02"
# amount of RSS pages
rss = 10
# FixIt 0.3.10 | NEW Term list feed config for RSS, Atom and JSON feed.
[params.list.feed]
# The number of posts to include in the feed. If set to -1, all posts.
limit = -1
# whether to show the full text content in feed.
fullText = false
# FixIt 0.3.13 | NEW recently updated pages config for archives, section and term list
[params.recentlyUpdated]
archives = true
section = true
list = true
days = 30
maxCount = 10
# FixIt 0.2.17 | NEW TagCloud config for tags page
[params.tagcloud]
@@ -441,8 +509,6 @@ defaultContentLanguageInSubdir = true
# Home page config
[params.home]
# amount of RSS pages
rss = 10
# Home page profile
[params.home.profile]
enable = true
@@ -475,7 +541,7 @@ defaultContentLanguageInSubdir = true
Twitter = ""
Instagram = ""
Facebook = ""
Telegram = "ossOpration"
Telegram = ""
Medium = ""
Gitlab = ""
Youtubelegacy = ""
@@ -554,9 +620,18 @@ defaultContentLanguageInSubdir = true
TryHackMe = ""
Douyin = ""
TikTok = ""
Credly = ""
Phone = ""
Email = "jamesflare1212@gmail.com"
RSS = true
# custom social links like the following
# [params.social.twitter]
# id = "lruihao"
# weight = 3
# prefix = "https://twitter.com/"
# Title = "Twitter"
# [social.twitter.icon]
# class = "fa-brands fa-x-twitter fa-fw"
# Page config
[params.page]
@@ -574,7 +649,7 @@ defaultContentLanguageInSubdir = true
twemoji = false
# whether to enable lightgallery
# FixIt 0.2.18 | CHANGED if set to "force", images in the content will be forced to shown as the gallery.
lightgallery = false
lightgallery = true
# whether to enable the ruby extended syntax
ruby = true
# whether to enable the fraction extended syntax
@@ -586,9 +661,9 @@ defaultContentLanguageInSubdir = true
# whether to show link to Raw Markdown content of the post
linkToMarkdown = true
# FixIt 0.3.0 | NEW whether to show link to view source code of the post
linkToSource = true
linkToSource = false
# FixIt 0.3.0 | NEW whether to show link to edit the post
linkToEdit = true
linkToEdit = false
# FixIt 0.3.0 | NEW whether to show link to report issue for the post
linkToReport = true
# whether to show the full text content in RSS
@@ -605,7 +680,7 @@ defaultContentLanguageInSubdir = true
# FixIt 0.2.17 | NEW end of post flag
endFlag = ""
# FixIt 0.2.18 | NEW whether to enable instant.page
instantPage = false
instantPage = true
# FixIt 0.3.0 | NEW whether to enable collection list at the sidebar
collectionList = true
# FixIt 0.3.0 | NEW whether to enable collection navigation at the end of the post
@@ -627,7 +702,7 @@ defaultContentLanguageInSubdir = true
position = "right"
# FixIt 0.2.13 | NEW Display a message at the beginning of an article to warn the reader that its content might be expired
[params.page.expirationReminder]
enable = false
enable = true
# Display the reminder if the last modified time is more than 90 days ago
reminder = 90
# Display warning if the last modified time is more than 180 days ago
@@ -636,10 +711,13 @@ defaultContentLanguageInSubdir = true
closeComment = false
# FixIt 0.3.0 | NEW page heading config
[params.page.heading]
# used with `markup.tableOfContents.ordered` parameter
# FixIt 0.3.3 | NEW whether to capitalize automatic text of headings
capitalize = false
[params.page.heading.number]
# whether to enable auto heading numbering
enable = false
# FixIt 0.3.3 | NEW only enable in main section pages (default is posts)
onlyMainSection = true
[params.page.heading.number.format]
h1 = "{title}"
h2 = "{h2} {title}"
@@ -662,10 +740,12 @@ defaultContentLanguageInSubdir = true
mhchem = true
# Code config
[params.page.code]
# FixIt 0.3.9 | NEW whether to enable the code wrapper
enable = true
# whether to show the copy button of the code block
copy = true
# FixIt 0.2.13 | NEW whether to show the edit button of the code block
edit = true
edit = false
# the maximum number of lines of displayed code by default
maxShownLines = 10
# Mapbox GL JS config (https://docs.mapbox.com/mapbox-gl-js)
@@ -780,8 +860,8 @@ defaultContentLanguageInSubdir = true
appKey = ""
placeholder = ""
avatar = "mp"
meta = ""
requiredFields = ""
meta = ['nick','mail','link']
requiredFields = []
pageSize = 10
lang = ""
visitor = true
@@ -813,6 +893,7 @@ defaultContentLanguageInSubdir = true
texRenderer = false # FixIt 0.2.16 | NEW
search = false # FixIt 0.2.16 | NEW
recaptchaV3Key = "" # FixIt 0.2.16 | NEW
turnstileKey = "" # FixIt 0.3.8 | NEW
reaction = false # FixIt 0.2.18 | NEW
# Facebook comment config (https://developers.facebook.com/docs/plugins/comments)
[params.page.comment.facebook]
@@ -910,6 +991,25 @@ defaultContentLanguageInSubdir = true
# For values, see https://mermaid.js.org/config/theming.html#available-themes
themes = ["default", "dark"]
# FixIt 0.3.13 | NEW Admonitions custom config
# See https://fixit.lruihao.cn/documentation/content-management/shortcodes/extended/admonition/#custom-admonitions
# syntax: <type> = <icon>
[params.admonition]
# ban = "fa-solid fa-ban"
# FixIt 0.3.14 | NEW Task lists custom config
# See https://fixit.lruihao.cn/documentation/content-management/advanced/#custom-task-lists
# syntax: <sign> = <icon>
[params.taskList]
# tip = "fa-regular fa-lightbulb"
# FixIt 0.3.15 | NEW version shortcode config
[params.repoVersion]
# url prefix for the release tag
url = "https://github.com/hugo-fixit/FixIt/releases/tag/v"
# project name
name = "FixIt"
# FixIt 0.2.12 | NEW PanguJS config
[params.pangu]
# For Chinese writing
@@ -939,11 +1039,16 @@ defaultContentLanguageInSubdir = true
# FixIt 0.2.13 | NEW watermark's fontFamily
fontFamily = "inherit"
# FixIt 0.2.12 | NEW Busuanzi count
[params.ibruce]
# FixIt 0.3.10 | NEW Busuanzi count
[params.busuanzi]
# whether to enable busuanzi count
enable = false
# Enable in post meta
enablePost = false
# busuanzi count core script source. Default is https://vercount.one/js
source = "https://vercount.one/js"
# whether to show the site views
siteViews = true
# whether to show the page views
pageViews = true
# Site verification code config for Google/Bing/Yandex/Pinterest/Baidu/360/Sogou
[params.verification]
@@ -964,7 +1069,7 @@ defaultContentLanguageInSubdir = true
# Analytics config
[params.analytics]
enable = false
enable = true
# Google Analytics
[params.analytics.google]
id = ""
@@ -975,6 +1080,31 @@ defaultContentLanguageInSubdir = true
id = ""
# server url for your tracker if you're self hosting
server = ""
# FixIt 0.3.16 | NEW Baidu Analytics
[params.analytics.baidu]
id = ""
# FixIt 0.3.16 | NEW Umami Analytics
[params.analytics.umami]
data_website_id = "c687e659-a8de-4d17-a794-0fb82dd085f5"
src = "https://track.jamesflare.com/script.js"
data_host_url = "https://track.jamesflare.com"
data_domains = ""
# FixIt 0.3.16 | NEW Plausible Analytics
[params.analytics.plausible]
data_domain = ""
src = ""
# FixIt 0.3.16 | NEW Cloudflare Analytics
[params.analytics.cloudflare]
token = ""
# FixIt 0.3.16 | NEW Splitbee Analytics
[params.analytics.splitbee]
enable = false
# no cookie mode
no_cookie = true
# respect the do not track setting of the browser
do_not_track = true
# token(optional), more info on https://splitbee.io/docs/embed-the-script
data_token = ""
# Cookie consent config
[params.cookieconsent]
@@ -1043,23 +1173,34 @@ defaultContentLanguageInSubdir = true
# ["barber-shop", "big-counter", "bounce", "center-atom", "center-circle", "center-radar", "center-simple",
# "corner-indicator", "fill-left", "flash", "flat-top", "loading-bar", "mac-osx", "material", "minimal"]
theme = "minimal"
# FixIt 0.2.17 | NEW Define custom file paths
# Create your custom files in site directory `layouts/partials/custom` and uncomment needed files below
[params.customFilePath]
# aside = "custom/aside.html"
# profile = "custom/profile.html"
# footer = "custom/footer.html"
# FixIt 0.3.10 | NEW Global Feed config for RSS, Atom and JSON feed.
[params.feed]
# The number of posts to include in the feed. If set to -1, all posts.
limit = 10
# whether to show the full text content in feed.
fullText = true
# FixIt 0.3.12 | NEW Custom partials config
# Custom partials must be stored in the /layouts/partials/ directory.
# Depends on open custom blocks https://fixit.lruihao.cn/references/blocks/
[params.customPartials]
head = []
menuDesktop = []
menuMobile = []
profile = []
aside = []
comment = []
footer = []
widgets = []
assets = []
postFooterBefore = []
postFooterAfter = []
# FixIt 0.2.15 | NEW Developer options
# Select the scope named `public_repo` to generate personal access token,
# Configure with environment variable `HUGO_PARAMS_GHTOKEN=xxx`, see https://gohugo.io/functions/os/getenv/#examples
[params.dev]
enable = false
# Check for updates
c4u = false
# Please do not expose to public!
githubToken = ""
# Mobile Devtools config
[params.dev.mDevtools]
enable = false
# "vConsole", "eruda" supported
type = "vConsole"
c4u = false

View File

@@ -76,6 +76,6 @@
parent = "about"
name = "About Me"
url = "/me/"
weight = 5
weight = 10
[main.params]
icon = "fa-solid fa-clipboard-user"

View File

@@ -76,6 +76,6 @@
parent = "about"
name = "关于我"
url = "/me/"
weight = 5
weight = 10
[main.params]
icon = "fa-solid fa-clipboard-user"

View File

@@ -20,4 +20,12 @@ I am a fan of technology, design, and innovation. Recently, I am watching Starsh
## What is this blog?
This is a personal blog where I write about my thoughts and experiences. I hope you find it interesting and useful.
This is a personal blog where I write about my thoughts and experiences. I hope you find it interesting and useful.
## My Projects
{{< gh-repo-card-container >}}
{{< gh-repo-card repo="JamesFlare1212/FlareBlog" >}}
{{< gh-repo-card repo="JamesFlare1212/SCDocs" >}}
{{< gh-repo-card repo="JamesFlare1212/NancyPortfolio" >}}
{{< /gh-repo-card-container >}}

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -36,7 +36,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -40,8 +40,8 @@ seo:
images: []
repost:
enable: true
url: ""
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/theme-documentation-content/#front-matter
---

View File

@@ -0,0 +1,84 @@
---
title: CSCI 1100 - Test Crib Sheets
subtitle:
date: 2024-12-08T22:17:36-05:00
lastmod: 2024-12-08T22:17:36-05:00
slug: csci-1100-crib-sheets
draft: false
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description: This post shares the crib sheets I used in Test 2, Test 3, and the Final of CSCI 1100.
keywords:
license:
comment: true
weight: 0
tags:
- CSCI 1100
- Exam
- RPI
- Python
- Programming
categories:
- Programming
collections:
- CSCI 1100
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This post shares the crib sheets I used in Test 2, Test 3, and the Final of CSCI 1100.
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
> [!TIP]
> You can edit these PDFs with Adobe Photoshop. Yes, really: I made them in Photoshop. The font used in these crib sheets is [Intel One Mono](https://github.com/intel/intel-one-mono).
## Test 1 Crib Sheet
> [!NOTE]
> I didn't use a crib sheet in Test 1, so there is no crib sheet here.
## Test 2 Crib Sheet
<div style="width: 100%; max-width: 600px; margin: 0 auto; display: block;">
<embed src="Test 2 Crib Sheet A.pdf" type="application/pdf" width="100%" height="500px">
</div>
<div style="width: 100%; max-width: 600px; margin: 0 auto; display: block;">
<embed src="Test 2 Crib Sheet B.pdf" type="application/pdf" width="100%" height="500px">
</div>
## Test 3 Crib Sheet
<div style="width: 100%; max-width: 600px; margin: 0 auto; display: block;">
<embed src="Test 3 Crib Sheet A.pdf" type="application/pdf" width="100%" height="500px">
</div>
<div style="width: 100%; max-width: 600px; margin: 0 auto; display: block;">
<embed src="Test 3 Crib Sheet B.pdf" type="application/pdf" width="100%" height="500px">
</div>
## Final Crib Sheet
<div style="width: 100%; max-width: 600px; margin: 0 auto; display: block;">
<embed src="Final Crib Sheet A.pdf" type="application/pdf" width="100%" height="500px">
</div>

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -0,0 +1,139 @@
---
title: CSCI 1100 - Test 4 Overview and Practice Questions
subtitle:
date: 2024-04-26T03:54:07-04:00
slug: csci-1100-exam-4-overview
draft: true
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description:
keywords: ["CSCI 1100","Computer Science","Test 4","Practice Questions", "Python"]
license:
comment: true
weight: 0
tags:
- CSCI 1100
- Exam
- RPI
- Python
- Programming
categories:
- Programming
collections:
- CSCI 1100
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary:
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Overview
- The final exam will be held Monday, April 29, 2024 from 6:30 pm - 8:30 pm. Note that this will be a two-hour exam.
- Most students will take the exam from 6:30 pm - 8:30 pm (120 minutes). The exam will be given in 308 DCC for most students.
- Students who provided an accommodation letter indicating the need for extra time or a quiet location will be given extra time beyond the two-hour base. Shianne Hulbert will send you a reminder of your time and location. Use whatever information she sends you; it overrides any assignment given to you on Submitty. If you show up at your Submitty location or time instead, you will be allowed to take the exam, but you will lose the accommodations.
- Students MUST:
- Go to their assigned rooms.
- Bring their IDs to the exam.
- Sit in the correct room/section.
- Put away all calculators, phones, etc., and take off/out all headphones and earbuds.
Failing to do one of these may result in a 20-point penalty on the exam score; failing to do all of them can cost up to 80 points.
- During the exam, if you are doubtful/confused about a problem, simply state your assumptions and/or interpretation as comments right before your code and write your solution accordingly.
- Exam coverage is the entire semester, except for the following:
- JSON data format
- Images
You do not need to know the intricacies of tkinter GUI formatting, but you should understand the GUI code structure we outlined (Lecture Notes and Class Code), be able to trace through event driven code and write small methods that are invoked by the GUI. Consider the lecture exercises for Lecture 22 and the modifications you made to the BallDraw class during Lab 11 for practice.
- Please review lecture notes, class exercises, labs, homework, practice programs, and tests, working through problems on your own before looking at the solutions.
- You are expected to abide by the following Honor code when appearing for this exam:
"On my honor, I have neither given nor received any aid on this exam."
- As part of our regular class time on Monday April 22, we will answer questions about the course material, so bring your questions!
- There are often study events held on campus, for example UPE often holds tutoring sessions. I do not know of any specific events right now, but we will post anything we learn to the Submitty discussion forum. Please monitor the channel if you are looking for help.
- What follows are a few additional practice problems. These are by no means comprehensive, so rework problems from earlier in the semester. All the material from tests 1, 2, and 3 is also fair game. This is a comprehensive final exam.
- We have separately provided Spring 2017's final exam.
## Questions
### Merge Without Extend
> Write a version of `merge` that does all of the work inside the `while` loop and does not use the `extend` method.
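A minimal sketch of one possible solution, assuming `merge` takes two sorted lists (illustrative, not the official class solution):

```python
def merge(a, b):
    """Merge two sorted lists; all of the work happens inside the while loop."""
    result = []
    i = j = 0
    while i < len(a) or j < len(b):
        # Take from a when b is exhausted, or when a's head is <= b's head.
        if j >= len(b) or (i < len(a) and a[i] <= b[j]):
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    return result
```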
### Three Way Merge
> Using what you learned from writing the solution to the previous problem, write a function to merge three sorted lists. For example:
>
> ```python
> print(three_way_merge([2, 3, 4, 4, 4, 5], [1, 5, 6, 9], [6, 9, 13]))
> ```
>
> Should output:
>
> ```
> [1, 2, 3, 4, 4, 4, 5, 5, 6, 6, 9, 9, 13]
> ```
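One way to sketch this, generalizing the two-list merge by always taking the smallest of the three current heads (illustrative, not the official solution):

```python
def three_way_merge(a, b, c):
    """Merge three sorted lists by repeatedly taking the smallest head."""
    result = []
    i = j = k = 0
    inf = float('inf')
    while i < len(a) or j < len(b) or k < len(c):
        # Exhausted lists contribute infinity so they are never chosen.
        x = a[i] if i < len(a) else inf
        y = b[j] if j < len(b) else inf
        z = c[k] if k < len(c) else inf
        if x <= y and x <= z:
            result.append(x)
            i += 1
        elif y <= z:
            result.append(y)
            j += 1
        else:
            result.append(z)
            k += 1
    return result
```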
### Score Range Counts
> Given a list of test scores, where the maximum score is 100, write code that prints the number of scores that are in the range 0-9, 10-19, 20-29, ... 80-89, 90-100. Try to think of several ways to do this. Outline test cases you should add.
>
> For example, given the list of scores:
>
> ```python
> scores = [12, 90, 100, 52, 56, 76, 92, 83, 39, 77, 73, 70, 80]
> ```
>
> The output should be something like:
>
> ```
> [0,9]: 0
> [10,19]: 1
> [20,29]: 0
> [30,39]: 1
> [40,49]: 0
> [50,59]: 2
> [60,69]: 0
> [70,79]: 4
> [80,89]: 2
> [90,100]: 3
> ```
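One of several possible approaches: integer-divide each score by 10 to pick a bucket, folding 100 into the last bucket (a sketch, not the official solution):

```python
def score_range_counts(scores):
    """Count scores in [0,9], [10,19], ..., [80,89], [90,100]."""
    counts = [0] * 10
    for s in scores:
        counts[min(s // 10, 9)] += 1  # min() folds 100 into the 90-100 bucket
    return counts

scores = [12, 90, 100, 52, 56, 76, 92, 83, 39, 77, 73, 70, 80]
for i, c in enumerate(score_range_counts(scores)):
    hi = 100 if i == 9 else 10 * i + 9
    print("[{},{}]: {}".format(10 * i, hi, c))
```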
### Closest 10 Values
> Given a list of floating point values containing at least 10 values, how do you find the 10 values that are closest to each other? In other words, find the smallest interval that contains 10 values. By definition the minimum and maximum of this interval will be values in the original list. These two values and the 8 in between constitute the desired answer. This is a bit of a challenging variation on earlier problems from the semester. Start by outlining your approach. Outline the test cases. For example, given the list:
>
> ```python
> values = [1.2, 5.3, 1.1, 8.7, 9.5, 11.1, 2.5, 3, 12.2, 8.8, 6.9, 7.4,
> 0.1, 7.7, 9.3, 10.1, 17, 1.1]
> ```
>
> The list of the closest 10 should be:
>
> ```
> [6.9, 7.4, 7.7, 8.7, 8.8, 9.3, 9.5, 10.1, 11.1, 12.2]
> ```
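One possible approach: after sorting, the 10 closest values must be 10 consecutive entries, so slide a window of size 10 and keep the one with the smallest span (a sketch, not the official solution):

```python
def closest_10(values):
    """Return the 10 values spanning the smallest interval (len(values) >= 10)."""
    v = sorted(values)
    # v[i + 9] - v[i] is the span of the window starting at i; pick the smallest.
    best = min(range(len(v) - 9), key=lambda i: v[i + 9] - v[i])
    return v[best:best + 10]
```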

View File

@@ -10,7 +10,7 @@ author:
email:
avatar: /site-logo.avif
description: This blog post provides a detailed overview of a Python programming homework assignment, which includes creating a Mad Libs game, calculating speed and pace, and generating a framed box with user-specified dimensions.
keywords: ["Python", "programming", "homework", "Mad Libs", "speed calculation", "framed box"]
keywords: ["Python", "Programming", "Homework", "Mad Libs", "Speed calculation", "Framed box"]
license:
comment: true
weight: 0
@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
@@ -120,8 +120,6 @@ We will test your code for the values used in our examples as well as a range of
{{< link href="HW1.zip" content="HW1.zip" title="Download HW1.zip" download="HW1.zip" card=true >}}
***
## Solution
### hw1_part1.py

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
@@ -182,8 +182,6 @@ Test your code well and when you are sure that it works, please submit it as a f
{{< link href="HW2.zip" content="HW2.zip" title="Download HW2.zip" download="HW2.zip" card=true >}}
***
## Solution
### hw2_part1.py

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
@@ -154,11 +154,6 @@ Input key words and state abbreviations may be typed in upper or lower case and
### hw4_part1.py
```python
"""
This script is used to test password strength based on certain criteria.
Author: Jinshan Zhou
"""
import hw4_util
if __name__ == "__main__":

View File

@@ -40,7 +40,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

Binary file not shown.

View File

@@ -0,0 +1,436 @@
---
title: CSCI 1100 - Homework 6 - Files, Sets and Document Analysis
subtitle:
date: 2024-04-13T15:36:47-04:00
slug: csci-1100-hw-6
draft: false
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description: This blog post introduces a Python programming assignment for analyzing and comparing text documents using natural language processing techniques, such as calculating word length, distinct word ratios, and Jaccard similarity between word sets and pairs.
keywords: ["Python", "natural language processing", "text analysis", "document comparison"]
license:
comment: true
weight: 0
tags:
- CSCI 1100
- Homework
- RPI
- Python
- Programming
categories:
- Programming
collections:
- CSCI 1100
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This blog post introduces a Python programming assignment for analyzing and comparing text documents using natural language processing techniques, such as calculating word length, distinct word ratios, and Jaccard similarity between word sets and pairs.
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: true
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Overview
This homework is worth 100 points total toward your overall homework grade. It is due Thursday, March 21, 2024 at 11:59:59 pm. As usual, there will be a mix of autograded points, instructor test case points, and TA graded points. There is just one "part" to this homework.
See the handout for Submission Guidelines and Collaboration Policy for a discussion on grading and on what is considered excessive collaboration. These rules will be in force for the rest of the semester.
You will need the data files we provide in `hw6_files.zip`, so be sure to download this file from the Course Materials section of Submitty and unzip it into your directory for HW 6. The zip file contains data files and example input / output for your program.
## Problem Introduction
There are many software systems for analyzing the style and sophistication of written text and even deciding if two documents were authored by the same individual. The systems analyze documents based on the sophistication of word usage, frequently used words, and words that appear closely together. In this assignment you will write a Python program that reads two files containing the text of two different documents, analyzes each document, and compares the documents. The methods we use are simple versions of much more sophisticated methods that are used in practice in the field known as natural language processing (NLP).
### Files and Parameters
Your program must work with three files and an integer parameter.
The name of the first file will be `stop.txt` for every run of your program, so you don't need to ask the user for it. The file contains what we will refer to as "stop words" — words that should be ignored. You must ensure that the file `stop.txt` is in the same folder as your `hw6_sol.py` python file. We will provide one example of it, but may use others in testing your code.
You must request the names of two documents to analyze and compare and an integer "maximum separation" parameter, which will be referred to as `max_sep` here. The requests should look like:
```text
Enter the first file to analyze and compare ==> doc1.txt
doc1.txt
Enter the second file to analyze and compare ==> doc2.txt
doc2.txt
Enter the maximum separation between words in a pair ==> 2
2
```
### Parsing
The job of parsing for this homework is to break a file of text into a single list of consecutive words. To do this, the contents from a file should first be split up into a list of strings, where each string contains consecutive non-white-space characters. Then each string should have all non-letters removed and all letters converted to lower case. For example, if the contents of a file (e.g., `doc1.txt`) are read to form the string (note the end-of-line and tab characters)
```python
s = " 01-34 can't 42weather67 puPPy, \r \t and123\n Ch73%allenge 10ho32use,.\n"
```
then the splitting should produce the list of strings
```python
['01-34', "can't", '42weather67', 'puPPy,', 'and123', 'Ch73%allenge', '10ho32use,.']
```
and, after removing non-letters and converting to lower case, this should become the list of (non-empty) strings
```python
['cant', 'weather', 'puppy', 'and', 'challenge', 'house']
```
Note that the first string, `'01-34'`, is completely removed because it has no letters. All three files — `stop.txt` and the two document files called `doc1.txt` and `doc2.txt` above — should be parsed this way.
Once this parsing is done, the list resulting from parsing the file `stop.txt` should be converted to a set. This set contains what are referred to in NLP as "stop words" — words that appear so frequently in text that they should be ignored.
The files `doc1.txt` and `doc2.txt` contain the text of the two documents to compare. For each, the list returned from parsing should be further modified by removing any stop words. Continuing with our example, if `'cant'` and `'and'` are stop words, then the word list should be reduced to
```python
['weather', 'puppy', 'challenge', 'house']
```
Words like "and" are almost always in stop lists, while "cant" (really, the contraction "can't") is in some. Note that the word lists built from `doc1.txt` and `doc2.txt` should be kept as lists because the word ordering is important.
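The parsing rules above can be sketched as follows; the function name and the idea of passing the stop-word set as a parameter are illustrative, not a required interface:

```python
def parse_words(raw_text, stop_words=frozenset()):
    """Split on whitespace, keep only letters (lowercased), then drop
    empty strings and stop words."""
    words = []
    for token in raw_text.split():  # split() handles spaces, \t, \r, \n
        cleaned = ''.join(ch.lower() for ch in token if ch.isalpha())
        if cleaned and cleaned not in stop_words:
            words.append(cleaned)
    return words
```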
### Analyze Each Document's Word List
Once you have produced the word list with stop words removed, you are ready to analyze the word list. There are many ways to do this, but here are the ones required for this assignment:
1. Calculate and output the average word length, accurate to two decimal places. The idea here is that word length is a rough indicator of sophistication.
2. Calculate and output, accurate to three decimal places, the ratio between the number of distinct words and the total number of words. This is a measure of the variety of language used (although it must be remembered that some authors use words and phrases repeatedly to strengthen their message.)
3. For each word length starting at 1, find the set of words having that length. Print the length, the number of different words having that length, and at most six of these words. If, for a certain length, there are six or fewer words, print all of them, but if there are more than six print the first three and the last three in alphabetical order. For example, suppose our simple text example above were expanded to the list
```python
['weather', 'puppy', 'challenge', 'house', 'whistle', 'nation', 'vest',
'safety', 'house', 'puppy', 'card', 'weather', 'card', 'bike',
'equality', 'justice', 'pride', 'orange', 'track', 'truck',
'basket', 'bakery', 'apples', 'bike', 'truck', 'horse', 'house',
'scratch', 'matter', 'trash']
```
Then the output should be
```text
1: 0:
2: 0:
3: 0:
4: 3: bike card vest
5: 7: horse house pride ... track trash truck
6: 7: apples bakery basket ... nation orange safety
7: 4: justice scratch weather whistle
8: 1: equality
9: 1: challenge
```
4. Find the distinct word pairs for this document. A word pair is a two-tuple of words that appear `max_sep` or fewer positions apart in the document list. For example, if the user input resulted in `max_sep == 2`, then the first six word pairs generated will be:
```python
('puppy', 'weather'), ('challenge', 'weather'),
('challenge', 'puppy'), ('house', 'puppy'),
('challenge', 'house'), ('challenge', 'whistle')
```
Your program should output the total number of distinct word pairs. (Note that `('puppy', 'weather')` and `('weather', 'puppy')` should be considered the same word pair.) It should also output the first 5 word pairs in alphabetical order (as opposed to the order they are formed, which is what is written above) and the last 5 word pairs. You may assume, without checking, that there are enough words to generate these pairs. Here is the output for the longer example above (assuming that the name of the file they are read from is `ex2.txt`):
```text
Word pairs for document ex2.txt
54 distinct pairs
apples bakery
apples basket
apples bike
apples truck
bakery basket
...
puppy weather
safety vest
scratch trash
track truck
vest whistle
```
5. Finally, as a measure of how distinct the word pairs are, calculate and output, accurate to three decimal places, the ratio of the number of distinct word pairs to the total number of word pairs.
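The word-pair step (item 4) can be sketched like this; storing each pair as an alphabetically sorted 2-tuple makes `('puppy', 'weather')` and `('weather', 'puppy')` collapse to a single pair (illustrative, not the official solution):

```python
def distinct_word_pairs(words, max_sep):
    """Distinct pairs of words at most max_sep positions apart, alphabetized."""
    pairs = set()
    for i in range(len(words)):
        for j in range(i + 1, min(i + max_sep + 1, len(words))):
            # sorted() makes the pair order-independent before deduplication
            pairs.add(tuple(sorted((words[i], words[j]))))
    return sorted(pairs)
```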
### Compare Documents
The last step is to compare the documents for complexity and similarity. There are many possible measures, so we will implement just a few.
Before we do this we need to define a measure of similarity between two sets. A very common one, and the one we use here, is called Jaccard Similarity. This is a sophisticated-sounding name for a very simple concept (something that happens a lot in computer science and other STEM disciplines). If A and B are two sets, then the Jaccard similarity is just
$$
J(A, B) = \frac{|A \cap B|}{|A \cup B|}
$$
In plain English, it is the size of the intersection of the two sets divided by the size of their union. As examples, if $A$ and $B$ are equal, $J(A, B) = 1$, and if $A$ and $B$ are disjoint, $J(A, B) = 0$. As a special case, if one or both of the sets is empty, the measure is 0. The Jaccard measure is quite easy to calculate using Python set operations.
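As a sketch, the definition translates directly into Python set operations:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets; 0.0 when the union is empty."""
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)
```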
Here are the comparison measures between documents:
1. Decide which has a greater average word length. This is a rough measure of which uses more sophisticated language.
2. Calculate the Jaccard similarity in the overall word use in the two documents. This should be accurate to three decimal places.
3. Calculate the Jaccard similarity of word use for each word length. Each output should also be accurate to three decimal places.
4. Calculate the Jaccard similarity between the word pair sets. The output should be accurate to four decimal places. The documents we study here will not have substantial similarity of pairs, but in other cases this is a useful comparison measure.
See the example outputs for details.
## Notes
- An important part of this assignment is to practice with the use of sets. The most complicated instance of this occurs when handling the calculation of the word sets for each word length. This requires you to form a list of sets. The set associated with entry k of the list should be the words of length k.
- Sorting a list or a set of two-tuples of strings is straightforward. (Note that when you sort a set, the result is a list.) The ordering produced is alphabetical by the first element of the tuple and then, for ties, alphabetical by the second. For example,
```python
>>> v = [('elephant', 'kenya'), ('lion', 'kenya'), ('elephant', 'tanzania'), \
('bear', 'russia'), ('bear', 'canada')]
>>> sorted(v)
[('bear', 'canada'), ('bear', 'russia'), ('elephant', 'kenya'), \
('elephant', 'tanzania'), ('lion', 'kenya')]
```
- Submit just a single Python file, `hw6_sol.py`.
- A component missing from our analysis is the frequency with which each word appears. This is easy to keep track of using a dictionary, but we will not do that for this assignment. As you learn about dictionaries think about how they might be used to enhance the analysis we do here.
## Document Files
We have provided the example described above, and we will be testing your code along with several other documents (a few of them are):
- Elizabeth Alexander's poem Praise Song for the Day.
- Maya Angelou's poem On the Pulse of the Morning.
- A scene from William Shakespeare's Hamlet.
- Dr. Seuss's The Cat in the Hat.
- Walt Whitman's When Lilacs Last in the Dooryard Bloom'd (not all of it!)
All of these are available full-text online. See poetryfoundation.org and learn about some of the history of these poets, playwrights, and authors.
## Supporting Files
{{< link href="HW6.zip" content="HW6.zip" title="Download HW6.zip" download="HW6.zip" card=true >}}
## Solution
### hw6_sol.py
```python
# Debugging
#work_dir = "/mnt/c/Users/james/OneDrive/RPI/Spring 2024/CSCI-1100/Homeworks/HW6/hw6_files/"
work_dir = ""
stop_word = "stop.txt"
def get_stopwords():
stopwords = []
stoptxt = open(work_dir + stop_word, "r")
stop_words = stoptxt.read().split("\n")
stoptxt.close()
stop_words = [x.strip() for x in stop_words if x.strip() != ""]
for i in stop_words:
text = ""
for j in i:
if j.isalpha():
text += j.lower()
if text != "":
stopwords.append(text)
#print("Debug - Stop words:", stopwords)
return set(stopwords)
def parse(raw):
parsed = []
parsing = raw.replace("\n"," ").replace("\t"," ").replace("\r"," ").split(" ")
#print("Debug - Parssing step 1:", parsing)
parsing = [x.strip() for x in parsing if x.strip() != ""]
#print("Debug - Parssing step 2:", parsing)
for i in parsing:
text = ""
for j in i:
if j.isalpha():
text += j.lower()
if text != "":
parsed.append(text)
#print("Debug - Parssing step 3:", parsed)
parsed = [x for x in parsed if x not in get_stopwords()]
#print("Debug - Parssing step 4:", parsed)
return parsed
def get_avg_word_len(file):
#print("Debug - File:", file)
filetxt = open(work_dir + file, "r")
raw = filetxt.read()
filetxt.close()
parsed = parse(raw)
#print("Debug - Parsed:", parsed)
avg = sum([len(x) for x in parsed]) / len(parsed)
#print("Debug - Average:", avg)
return avg
def get_ratio_distinct(file):
filetxt = open(work_dir + file, "r").read()
distinct = list(set(parse(filetxt)))
total = len(parse(filetxt))
ratio = len(distinct) / total
#print("Debug - Distinct:", ratio)
return ratio
def word_length_ranking(file):
filetxt = open(work_dir + file, "r").read()
parsed = parse(filetxt)
max_length = max([len(x) for x in parsed])
#print("Debug - Max length:", max_length)
ranking = [[] for i in range(max_length + 1)]
for i in parsed:
if i not in ranking[len(i)]:
ranking[len(i)].append(i)
#print("Debug - Adding", i, "to", len(i))
for i in range(len(ranking)):
ranking[i] = sorted(ranking[i])
#print("Debug - Ranking:", ranking)
return ranking
def get_word_set_table(file):
str1 = ""
data = word_length_ranking(file)
for i in range(1, len(data)):
cache = ""
if len(data[i]) <= 6:
cache = " ".join(data[i])
else:
cache = " ".join(data[i][:3]) + " ... "
cache += " ".join(data[i][-3:])
if cache != "":
str1 += "{:4d}:{:4d}: {}\n".format(i, len(data[i]), cache)
else:
str1 += "{:4d}:{:4d}:\n".format(i, len(data[i]))
return str1.rstrip()
def get_word_pairs(file, maxsep):
filetxt = open(work_dir + file, "r").read()
parsed = parse(filetxt)
pairs = []
for i in range(len(parsed)):
for j in range(i+1, len(parsed)):
if j - i <= maxsep:
pairs.append((parsed[i], parsed[j]))
return pairs
def get_distinct_pairs(file, maxsep):
total_pairs = get_word_pairs(file, maxsep)
pairs = []
for i in total_pairs:
cache = sorted([i[0], i[1]])
pairs.append((cache[0], cache[1]))
return sorted(list(set(pairs)))
def get_word_pair_table(file, maxsep):
pairs = get_distinct_pairs(file, maxsep)
#print("Debug - Pairs:", pairs)
str1 = " "
str1 += str(len(pairs)) + " distinct pairs" + "\n"
if len(pairs) <= 10:
for i in pairs:
str1 += " {} {}\n".format(i[0], i[1])
else:
for i in pairs[:5]:
str1 += " {} {}\n".format(i[0], i[1])
str1 += " ...\n"
for i in pairs[-5:]:
str1 += " {} {}\n".format(i[0], i[1])
return str1.rstrip()
def get_jaccard_similarity(list1, list2):
setA = set(list1)
setB = set(list2)
intersection = len(setA & setB)
union = len(setA | setB)
if union == 0:
return 0.0
else:
return intersection / union
def get_word_similarity(file1, file2):
file1txt = open(work_dir + file1, "r").read()
file2txt = open(work_dir + file2, "r").read()
parsed1 = parse(file1txt)
parsed2 = parse(file2txt)
return get_jaccard_similarity(parsed1, parsed2)
def get_word_similarity_by_length(file1, file2):
word_by_length_1 = word_length_ranking(file1)
word_by_length_2 = word_length_ranking(file2)
similarity = []
for i in range(1, max(len(word_by_length_1), len(word_by_length_2))):
if i < len(word_by_length_1) and i < len(word_by_length_2):
similarity.append(get_jaccard_similarity(word_by_length_1[i], word_by_length_2[i]))
else:
similarity.append(0.0)
return similarity
def get_word_similarity_by_length_table(file1, file2):
similarity = get_word_similarity_by_length(file1, file2)
str1 = ""
for i in range(len(similarity)):
str1 += "{:4d}: {:.4f}\n".format(i+1, similarity[i])
return str1.rstrip()
def get_word_pairs_similarity(file1, file2, maxsep):
pairs1 = get_distinct_pairs(file1, maxsep)
pairs2 = get_distinct_pairs(file2, maxsep)
return get_jaccard_similarity(pairs1, pairs2)
if __name__ == "__main__":
# Debugging
#file1st = "cat_in_the_hat.txt"
#file2rd = "pulse_morning.txt"
#maxsep = 2
#s = " 01-34 can't 42weather67 puPPy, \r \t and123\n Ch73%allenge 10ho32use,.\n"
#print(parse(s))
#get_avg_word_len(file1st)
#get_ratio_distinct(file1st)
#print(word_length_ranking(file1st)[10])
#print(get_word_set_table(file1st))
# Get user input
file1st = input("Enter the first file to analyze and compare ==> ").strip()
print(file1st)
file2rd = input("Enter the second file to analyze and compare ==> ").strip()
print(file2rd)
maxsep = int(input("Enter the maximum separation between words in a pair ==> ").strip())
print(maxsep)
files = [file1st, file2rd]
for i in files:
print("\nEvaluating document", i)
print("1. Average word length: {:.2f}".format(get_avg_word_len(i)))
print("2. Ratio of distinct words to total words: {:.3f}".format(get_ratio_distinct(i)))
print("3. Word sets for document {}:\n{}".format(i, get_word_set_table(i)))
print("4. Word pairs for document {}\n{}".format(i, get_word_pair_table(i, maxsep)))
print("5. Ratio of distinct word pairs to total: {:.3f}".format(len(get_distinct_pairs(i, maxsep)) / len(get_word_pairs(i, maxsep))))
print("\nSummary comparison")
avg_word_length_ranking = []
for i in files:
length = get_avg_word_len(i)
avg_word_length_ranking.append((i, length))
avg_word_length_ranking = sorted(avg_word_length_ranking, key=lambda x: x[1], reverse=True)
print("1. {} on average uses longer words than {}".format(avg_word_length_ranking[0][0], avg_word_length_ranking[1][0]))
print("2. Overall word use similarity: {:.3f}".format(get_word_similarity(file1st, file2rd)))
print("3. Word use similarity by length:\n{}".format(get_word_similarity_by_length_table(file1st, file2rd)))
print("4. Word pair similarity: {:.4f}".format(get_word_pairs_similarity(file1st, file2rd, maxsep)))
```

Binary file not shown.

View File

@@ -0,0 +1,483 @@
---
title: CSCI 1100 - Homework 7 - Dictionaries
subtitle:
date: 2024-09-12T15:36:47-04:00
slug: csci-1100-hw-7
draft: false
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description: "This blog post outlines a homework assignment worth 100 points, due on March 28, 2024, focusing on Python dictionary manipulation. The assignment includes two parts: an autocorrect program and a movie rating analysis, both requiring careful handling of data files and dictionary operations."
keywords: ["Python", "Dictionaries"]
license:
comment: true
weight: 0
tags:
- CSCI 1100
- Homework
- RPI
- Python
- Programming
categories:
- Programming
collections:
- CSCI 1100
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: "This blog post outlines a homework assignment worth 100 points, due on March 28, 2024, focusing on Python dictionary manipulation. The assignment includes two parts: an autocorrect program and a movie rating analysis, both requiring careful handling of data files and dictionary operations."
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: true
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Overview
This homework is worth 100 points and it will be due Thursday, March 28, 2024 at 11:59:59 pm.
It has two parts, each worth 50 points. Please download `hw7_files.zip` and unzip it into the directory for your HW7. You will find multiple data files to be used in both parts.
The goal of this assignment is to work with dictionaries. In part 1, you will do some simple file processing. Read the guidelines very carefully there. In part 2, we have done all the file work for you so you should be able to get the data loaded in just a few lines. For both parts, you will spend most of your time manipulating dictionaries given to you in the various files.
Please remember to name your files `hw7_part1.py` and `hw7_part2.py`.
As always, make sure you follow the program structure guidelines. You will be graded on program correctness as well as good program structure.
Remember as well that we will be continuing to test homeworks for similarity. So, follow our guidelines for the acceptable levels of collaboration. You can download the guidelines from the Course Resources section of Submitty if you need a refresher. Note that this includes using someone else's code from a previous semester. Make sure the code you submit is truly your own.
## Honor Statement
There have been a number of incidents of academic dishonesty on homework assignments and this must change. Cases are easily flagged using automated tools, and verified by the instructors. This results in substantial grade penalties, poor learning, frustration, and a waste of precious time for everyone concerned. In order to mitigate this, the following is a restatement of the course integrity policy in the form of a pledge. By submitting your homework solution files for grading on Submitty, you acknowledge that you understand and have abided by this pledge:
- I have not shown my code to anyone in this class, especially not for the purposes of guiding their own work.
- I have not copied, with or without modification, the code of another student in this class or who took the class in a previous semester.
- I have not used a solution found or purchased on the internet for this assignment.
- The work I am submitting is my own and I have written it myself.
- I understand that if I am found to have broken this pledge that I will receive a 0 on the assignment and an additional 10 point overall grade penalty.
You will be asked to agree to each of these individual statements before you can submit your solutions to this homework.
Please understand that if you are one of the vast majority of the students who follow the rules and only work with other students to understand problem descriptions, Python constructs, and solution approaches you will not have any trouble whatsoever.
## Part 1: Autocorrect
We have all used auto-correct to fix our various typos and mistakes as we write, but have you ever wondered how it works? Here is a small version of autocorrect that looks for a few common typographical errors.
To solve this problem, your program will read the names of three files:
- The first contains a list of valid words and their frequencies,
- The second contains a list of words to autocorrect, and
- The third contains potential letter substitutions (described below).
The input word file has two entries per line; the first entry on the line is a single valid word in the English language and the second entry is a float representing the frequency of the word in the lexicon. The two values are separated by a comma.
Read this English dictionary into a Python dictionary, using words as keys and frequencies as values. You will use the frequency to decide the most likely correction when there are multiple possibilities.
The keyboard file has a line for each letter. The first entry on the line is the letter to be replaced and the remaining letters are possible substitutions for that letter. All the letters on the line are separated by spaces. These substitutions are calculated based on adjacency on the keyboard, so if you look down at your keyboard, you will see that the “a” key is surrounded by “q”, “w”, “s”, and “z”. Other substitutions were calculated similarly, so:
```text
b v f g h n
```
means that a possible replacement for `b` is any one of `v f g h n`. Read this keyboard file into a dictionary: the first letter is the key (e.g., b) and the remaining letters are the value, stored as a list.
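A sketch of reading the keyboard file into a dictionary; `parse_keyboard_lines` is a hypothetical helper name, split out so the line format is easy to see:

```python
def parse_keyboard_lines(lines):
    """Turn lines like 'b v f g h n' into {'b': ['v', 'f', 'g', 'h', 'n']}."""
    subs = {}
    for line in lines:
        parts = line.split()
        if parts:  # skip blank lines
            subs[parts[0]] = parts[1:]
    return subs

def read_keyboard(filename):
    """Read the keyboard file named by the user into a substitution dict."""
    with open(filename) as f:
        return parse_keyboard_lines(f)
```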
Your program will then go through every single word in the input file, autocorrect each word and print the correction. To correct a single word, you will consider the following:
- **FOUND**: If the word is in the dictionary, it is correct. There is no need for a change. Print it as found, and go on to the next word.
- Otherwise consider all of the remaining possibilities.
- **DROP**: If the word is not found, consider all possible ways to drop a single letter from the word. Store any valid words (words that are in your English dictionary) in some container (list/set/dictionary). These will be candidate corrections.
- **INSERT**: If the word is not found, consider all possible ways to insert a single letter in the word. Store any valid words in some container (list/set/dictionary). These will be candidate corrections.
- **SWAP**: Consider all possible ways to swap two consecutive letters from the word. Store any valid words in some container (list/set/dictionary). These will be candidate corrections.
- **REPLACE**: Next consider all possible ways to change a single letter in the word with any other letter from the possible replacements in the keyboard file. Store any valid words in some container (list/set/dictionary). These will be candidate corrections.
For example, for the keyboard file we have given you, possible replacements for `b` are `v f g h n`. Hence, if you are replacing `b` in `abar`, you should consider: `avar`, `afar`, `agar`, `ahar`, `anar`.
After going through all of the above, if there are multiple potential matches, sort them by their frequency from the English dictionary and print the three most frequently used, in order, as the most likely corrections. If there are three or fewer potential matches, print all of them in order. In the unlikely event that two words are equally likely based on frequency, you should pick the one that comes last in lexicographical order. See the note below.
If there are no potential matches using any of the above corrections, print `NOT FOUND`. Otherwise, print the word (15 spaces), the number of matches, and at most three matches, all on one line.
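The four edit operations can be sketched in one candidate generator; here `dictionary` maps word to frequency and `keyboard` maps letter to its substitution list, as described above (a sketch, not the official solution):

```python
import string

def candidates(word, dictionary, keyboard):
    """All DROP / INSERT / SWAP / REPLACE variants of word that are valid words."""
    cands = set()
    for i in range(len(word)):
        # DROP: remove the letter at position i.
        cands.add(word[:i] + word[i + 1:])
        # SWAP: exchange the consecutive letters at positions i and i+1.
        if i + 1 < len(word):
            cands.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
        # REPLACE: substitute the letter at i with a keyboard neighbor.
        for c in keyboard.get(word[i], []):
            cands.add(word[:i] + c + word[i + 1:])
    # INSERT: any letter at any position.
    for i in range(len(word) + 1):
        for c in string.ascii_lowercase:
            cands.add(word[:i] + c + word[i:])
    # Keep only candidates that are real words (dictionary lookup, not a loop).
    return {w for w in cands if w in dictionary}
```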
An example output of your program for the English dictionary we have given you is contained in `part1_output_01.txt`. Note that we will use a more extensive dictionary on Submitty, so your results may be different on Submitty than they are on your laptop.
When you are sure your homework works properly, submit it to Submitty. Your program must be named `hw7_part1.py` to work correctly.
### Notes:
1. Do NOT write a for loop to search to see if a string (word or letter) is in a dictionary! This will be very slow and may cause Submitty to terminate your program (and you to lose substantial points). Instead, you must use the `in` operator.
2. It is possible, but unlikely, that a candidate replacement word is generated more than once. We recommend that you gather all possible candidate replacements into a set before looking them up in the dictionary.
3. Ordering the potential matches by frequency can be handled easily. For each potential match, create a tuple with the frequency first, followed by the word. Add this to a list and then sort the list in reverse order. For example, if the list is `v`, then you just need the line of code `v.sort(reverse=True)`.
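Putting the note into code, here is a sketch of ranking candidates; the tie-break falls out of the reverse sort, since for equal frequencies the word that comes last lexicographically sorts first:

```python
def top_matches(matches, dictionary, k=3):
    """Rank candidate words by frequency (descending), keep at most k."""
    ranked = sorted(((dictionary[w], w) for w in matches), reverse=True)
    return [w for _, w in ranked[:k]]
```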
## Part 2: Well rated and not so well rated movies ...
In this section, we are providing you with two data files, `movies.json` and `ratings.json`, in JSON format. The first file contains movie information taken directly from IMDB, including ratings for some movies but not all. The second file contains ratings from Twitter. Be careful: not all movies in `movies.json` have a rating in `ratings.json`, and not all movies in `ratings.json` have corresponding info in `movies.json`.
The data can be read in its entirety with the following lines of code:
```python
import json

if __name__ == "__main__":
    movies = json.loads(open("movies.json").read())
    ratings = json.loads(open("ratings.json").read())
```
Both files store data in a dictionary. The first dictionary has movie ids as keys, and as each value a second dictionary containing the movie's attributes. For example:
```python
print(movies['3520029'])
```
This (for the movie with id `'3520029'`) produces the output:
```python
{'genre': ['Sci-Fi', 'Action', 'Adventure'], 'movie_year': 2010,
 'name': 'TRON: Legacy', 'rating': 6.8, 'numvotes': 254865}
```
This is the same as saying:
```python
movies = dict()
movies['3520029'] = {'genre': ['Sci-Fi', 'Action', 'Adventure'],
                     'movie_year': 2010, 'name': 'TRON: Legacy',
                     'rating': 6.8, 'numvotes': 254865}
```
If we wanted to get the individual information for each movie, we can use the following commands:
```python
print(movies['3520029']['genre'])
print(movies['3520029']['movie_year'])
print(movies['3520029']['rating'])
print(movies['3520029']['numvotes'])
```
which would provide the output:
```python
['Sci-Fi', 'Action', 'Adventure']
2010
6.8
254865
```
The second dictionary again has movie ids as keys, and a list of ratings as values. For example:
```python
print(ratings['3520029'])
```
This produces the output:
```python
[6, 7, 7, 7, 8]
```
So, this movie had 5 ratings with the above values.
Now, on to the homework.
### Problem specification
In this homework, assume you are given these two files called `movies.json` and `ratings.json`. Read the data in from these files. Ask the user for a year range (min year and max year) and two weights, `w1` and `w2`. Find all movies in `movies` made between the min and max years (inclusive of both). For each movie, compute the combined rating for the movie as follows:
```python
(w1 * imdb_rating + w2 * average_twitter_rating) / (w1 + w2)
```
where the `imdb_rating` comes from movies and `average_twitter_rating` is the average rating from ratings.
If a movie has no Twitter rating, or its Twitter rating has fewer than 3 entries, skip the movie. Now, repeatedly ask the user for a movie genre and report the best and worst movies in that genre, based on the years given and the rating you calculated. Repeat until the user enters `stop`.
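A minimal sketch of the combined-rating computation and the skip rule described above (the function name and the convention of returning `None` to mean "skip" are mine, not required by the assignment):

```python
def combined_rating(imdb_rating, twitter_ratings, w1, w2):
    """Return the combined rating, or None if the movie should be skipped."""
    if len(twitter_ratings) < 3:   # no Twitter rating, or fewer than 3 entries
        return None
    avg_twitter = sum(twitter_ratings) / len(twitter_ratings)
    return (w1 * imdb_rating + w2 * avg_twitter) / (w1 + w2)

# TRON: Legacy from the example above: IMDB 6.8, Twitter ratings [6, 7, 7, 7, 8]
print(combined_rating(6.8, [6, 7, 7, 7, 8], 0.7, 0.3))
```

With weights 0.7 and 0.3 this evaluates to (0.7 × 6.8 + 0.3 × 7.0) / 1.0 = 6.86.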
An example of the program run (how it will look when you run it using Spyder) is provided in file `hw7_part2_output_01.txt` (the second line for each movie has 8 spaces at the start of the line, and the rating is given in `{:.2f}` format).
The movies we are giving you for testing are a subset of the movies we will use during testing on Submitty, so do not be surprised if there are differences when you submit.
When you are sure your homework works properly, submit it to Submitty. Your program must be named `hw7_part2.py` to work correctly.
### General hint on sorting
It is possible that two movies have the same rating. Consider the following code:
```python
>>> example = [(1, "b"), (1, "a"), (2, "b"), (2, "a")]
>>> sorted(example)
[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
>>> sorted(example, reverse=True)
[(2, 'b'), (2, 'a'), (1, 'b'), (1, 'a')]
```
Note that the sort puts tuples in order based on the index 0 value first; in the case of ties, the tie is broken by the index 1 value. (If there were a tie in both the index 0 and index 1 values, the sort would continue with the index 2 value if available, and so on.) The same relationship holds when sorting lists of lists.
To determine the worst and best movies, the example code used a sort with the rating in the index 0 spot and with the name of the movie in the index 1 position. Keep this in mind when you are determining the worst and best movies.
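For example, with made-up `(rating, name)` tuples:

```python
# sample data, invented for illustration
rated = [(6.86, "TRON: Legacy"), (7.25, "Inception"), (7.25, "Interstellar")]
rated.sort(reverse=True)
best_rating, best_name = rated[0]     # highest rating; ties go to the later name
worst_rating, worst_name = rated[-1]  # lowest rating
print(best_name, worst_name)          # Interstellar TRON: Legacy
```

One sort gives both ends of the list, with the lexicographic tie-break falling out of the tuple comparison automatically.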
## Supporting Files
{{< link href="HW7.zip" content="HW7.zip" title="Download HW7.zip" download="HW7.zip" card=true >}}
## Solution
> [!NOTE]
> I didn't get full marks on this assignment (only 96%), so you should not fully trust it. I may redo it to produce a full-marks solution, and will add it here afterward.
### hw7_part1.py
```python
"""
An implementation of HW7 Part 1
"""
# Global Variables
word_path = ""
#word_path = "/mnt/c/Users/james/OneDrive/RPI/Spring 2024/CSCI-1100/Homeworks/HW7/hw7_files/"
# Debugging Variables
dictionary_file = "words_10percent.txt"
input_file = "input_words.txt"
keyboard_file = "keyboard.txt"
def get_dictionary(file_name):
words_dict = dict()
data = open(file_name, 'r')
for lines in data:
lines = lines.strip()
the_key = lines.split(",")[0]
the_value = float(lines.split(",")[1])
words_dict[the_key] = the_value
data.close()
return words_dict
def get_keyboard(file_name):
keyboard_dict = dict()
data = open(file_name, 'r')
for lines in data:
lines = lines.strip()
the_key = lines.split(" ")[0]
keyboard_dict[the_key] = []
for i in lines.split(" ")[1:]:
keyboard_dict[the_key].append(i)
data.close()
return keyboard_dict
def check_in_dictionary(word, dictionary):
if word in dictionary:
return True
return False
def get_input_words(file_name):
input_words = []
file = open(file_name, 'r')
for lines in file:
lines = lines.strip()
input_words.append(lines)
file.close()
return input_words
def get_drop_words(word):
drop_words = set()
for i in range(len(word)):
drop_words.add(word[:i] + word[i+1:])
return drop_words
def get_insert_words(word):
insert_words = set()
alphabet = "abcdefghijklmnopqrstuvwxyz"
for i in range(len(word)+1):
for j in alphabet:
insert_words.add(word[:i] + j + word[i:])
#print("Inserting: ", word[:i] + j + word[i:])
return insert_words
def get_swap_words(word):
swap_words = set()
for i in range(len(word) - 1):
swap_words.add(word[:i] + word[i+1] + word[i] + word[i+2:])
return swap_words
def get_replace_words(word, keyboard):
replace_words = set()
#print(keyboard)
for i in range(len(word)):
for j in range(len(word[i])):
for k in keyboard[word[i][j]]:
replace_words.add(word[:i] + k + word[i+1:])
return replace_words
def get_all_possible_words(word, keyboard):
all_possible_words = set()
all_possible_words.update(get_drop_words(word))
all_possible_words.update(get_insert_words(word))
all_possible_words.update(get_swap_words(word))
all_possible_words.update(get_replace_words(word, keyboard))
return all_possible_words
def get_suggestions(word, dictionary, keyboard):
suggestions = dict()
all_possible_words = get_all_possible_words(word, keyboard)
for i in all_possible_words:
if i in dictionary:
suggestions[i] = dictionary[i]
topx = sorted(suggestions, key=lambda x: (suggestions[x], x), reverse=True)
#print(topx)
return topx
def construct_output(input_words, dictionary, keyboard):
output = ""
max_length = max([len(i) for i in input_words])
for i in input_words:
output += " " + " " * (max_length - len(i)) + i + " -> "
if check_in_dictionary(i, dictionary):
output += "FOUND"
elif len(get_suggestions(i, dictionary, keyboard)) == 0:
output += "NOT FOUND"
else:
output += "FOUND {:2d}".format(len(get_suggestions(i, dictionary, keyboard))) + ": "
suggestions = get_suggestions(i, dictionary, keyboard)[:3]
for j in suggestions:
output += " " + j
output += "\n"
return output
if __name__ == "__main__":
dictionary_file = input("Dictionary file => ").strip()
print(dictionary_file)
input_file = input("Input file => ").strip()
print(input_file)
keyboard_file = input("Keyboard file => ").strip()
print(keyboard_file)
dictionary = get_dictionary(word_path + dictionary_file)
#print(dictionary)
keyboard = get_keyboard(word_path + keyboard_file)
#print(keyboard)
#print(get_input_words(word_path + input_file))
#print(get_drop_words("hello"))
#print("shut" in get_insert_words("shu"))
#print(get_swap_words("hello"))
#print("integers" in get_replace_words("inteters", keyboard))
#print(get_all_possible_words("hello", keyboard))
#print(get_suggestions("doitd", dictionary, keyboard))
print(construct_output(get_input_words(word_path + input_file), dictionary, keyboard), end = "")
```
### hw7_part2.py
```python
"""
An implementation of HW7 Part 2
"""
import json
# Global Variables
word_path = ""
#word_path = "/mnt/c/Users/james/OneDrive/RPI/Spring 2024/CSCI-1100/Homeworks/HW7/hw7_files/"
genre = ""
# Debugging Variables
#min_year = 2000
#max_year = 2016
#imdb_weight = 0.7
#twitter_weight = 0.3
#genre = "sci-fi"
def get_movie_ids(movies, min_year, max_year):
ids = set()
for i in movies.keys():
if movies[i]['movie_year'] >= min_year and movies[i]['movie_year'] <= max_year:
ids.add(int(i))
return ids
def get_imdb_rating(movies, movie_id):
return float(movies[str(movie_id)]['rating'])
def get_twitter_rating(ratings, movie_id):
if str(movie_id) in ratings.keys():
return ratings[str(movie_id)]
else:
return []
def get_num_twitter_ratings(ratings, movie_id):
return len(get_twitter_rating(ratings, movie_id))
def get_weighted_rating(movies, ratings, movie_id, imdb_weight, twitter_weight):
imdb = get_imdb_rating(movies, movie_id)
twitter = 0.0
for i in get_twitter_rating(ratings, movie_id):
twitter += i
twitter /= len(get_twitter_rating(ratings, movie_id))
return (imdb * imdb_weight + twitter * twitter_weight) / (imdb_weight + twitter_weight)
def get_movie_name(movies, movie_id):
return movies[str(movie_id)]['name']
if __name__ == "__main__":
movies = json.loads(open(word_path + "movies.json").read())
ratings = json.loads(open(word_path + "ratings.json").read())
"""
movies['3520029'] = {'genre': ['Sci-Fi', 'Action', 'Adventure'],
'movie_year': 2010, 'name': 'TRON: Legacy',
'rating': 6.8, 'numvotes': 254865}
"""
min_year = int(input("Min year => ").strip())
print(min_year)
max_year = int(input("Max year => ").strip())
print(max_year)
imdb_weight = float(input("Weight for IMDB => ").strip())
print(imdb_weight)
twitter_weight = float(input("Weight for Twitter => ").strip())
print(twitter_weight)
ids = get_movie_ids(movies, min_year, max_year)
#print(ids)
while genre.lower() !="stop":
genre = input("\nWhat genre do you want to see? ").strip()
print(genre)
if genre == "stop":
break
min_rating = 10000.0
max_rating = 0.0
min_name = ""
max_name = ""
mv_min_year = 10000
mv_max_year = 0
for i in ids:
if get_num_twitter_ratings(ratings, i) <= 3:
continue
genres = movies[str(i)]['genre']
genres = [x.lower() for x in genres]
#print("Debug", i, genres)
if genre.lower() in genres:
rating = get_weighted_rating(movies, ratings, i, imdb_weight, twitter_weight)
#print("Debug", rating)
if rating < min_rating:
min_rating = rating
min_name = get_movie_name(movies, i)
mv_min_year = movies[str(i)]['movie_year']
if rating > max_rating:
max_rating = rating
max_name = get_movie_name(movies, i)
mv_max_year = movies[str(i)]['movie_year']
if min_name == "" or max_name == "":
print("\nNo {} movie found in {} through {}".format(genre, mv_min_year, mv_max_year))
else:
print("\nBest:\n Released in {}, {} has a rating of {:.2f}".format(mv_max_year, max_name, max_rating))
print("\nWorst:\n Released in {}, {} has a rating of {:.2f}".format(mv_min_year, min_name, min_rating))
genre = genre
#genre = "stop" # Debugging Only
```

---
title: "CSCI 1100 - Homework 8 - Bears, Berries, and Tourists Redux - Classes"
subtitle:
date: 2024-09-13T15:36:47-04:00
slug: csci-1100-hw-8
draft: false
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description: This blog post provides a detailed guide on completing Homework 8 for CSCI 1100, focusing on simulating a berry field with bears and tourists using Python classes. It covers the creation of BerryField, Bear, and Tourist classes, and instructions for submitting the assignment.
keywords: ["Python", "Classes", "Simulation", "Homework 8"]
license:
comment: true
weight: 0
tags:
- CSCI 1100
- Homework
- RPI
- Python
- Programming
categories:
- Programming
collections:
- CSCI 1100
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This blog post provides a detailed guide on completing Homework 8 for CSCI 1100, focusing on simulating a berry field with bears and tourists using Python classes. It covers the creation of BerryField, Bear, and Tourist classes, and instructions for submitting the assignment.
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Overview
This homework is worth 100 points toward your overall homework grade and is due Thursday, April 18, 2024, at 11:59:59 pm. It has three parts. The first two are not worth many points and may end up being worth 0. They are mainly there to give you information to help you debug your solution. Please download `hw8_files.zip` and unzip it into the directory for your HW8. You will find data files and sample outputs for each of the parts.
The goal of this assignment is to work with classes. You will be asked to write a simulation engine and use classes to encapsulate data and functionality. You will have a lot of design choices to make. While we have done simulations before, this one will be more complex. It is especially important that you start slowly, build a program that works for simple cases, test it, and then add more complexity. We will provide test cases of increasing difficulty. Make sure you develop slowly and test thoroughly.
## Submission Instructions
In this homework, for the first time, you will be submitting multiple files to Submitty that together comprise a single program. Please follow these instructions carefully.
Each of Part 1, Part 2, and Part 3 will require you to write a main program: `hw8_part1.py`, `hw8_part2.py`, and `hw8_part3.py`, respectively. In addition to each main file, you must also submit three modules, each of which encapsulates a class: `BerryField.py` containing your BerryField class, `Bear.py` containing your Bear class, and `Tourist.py` containing your Tourist class.
As always, make sure you follow the program structure guidelines. You will be graded on good program structure as well as program correctness.
Remember as well that we will be continuing to test homeworks for similarity. So, follow our guidelines for the acceptable levels of collaboration. You can download the guidelines from the resources section in the Course Materials if you need a refresher. We take this very seriously and will not hesitate to impose penalties when warranted.
## Getting Started
You will need to write at least three classes for this assignment corresponding to a BerryField, a Bear, and a Tourist. We are going to give you a lot of freedom in how you organize these three classes, but each class must have at least an initializer and a string method. Additional methods are up to you. Each of the classes is described below.
### BerryField
The `BerryField` class must maintain and manage the locations of berries as a square N x N grid, with (0,0) being the upper left corner and (N-1, N-1) being the lower right corner. Each space holds 0-10 berry units.
- The initializer must, minimally, be able to take in a grid of values (think of our Sudoku lab) and use it to create a berry field with the values contained in the grid.
- The string function must, minimally, be able to generate a string of the current state of the berry patch. Each block in the grid must be formatted with the `"{:>4}"` format specifier. If there is a bear at the location, the grid should have a `"B"`; if there is a tourist, the grid should have a `"T"`; and if there is both a bear and a tourist, the grid should have an `"X"`. If there is neither a bear nor a tourist, it should have the number of berries at the location.
- Berries grow. The BerryField class must provide a way to grow the berry field. When the berries grow, any location with a value `1 <= number of berries < 10` will gain an extra berry.
- Berries also spread. Any location with no berries that is adjacent to a location with 10 berries will get 1 berry during the grow operation.
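The grow-and-spread step above can be sketched as follows. This assumes that "adjacent" includes diagonals and that spreading is computed from the pre-grow values; the handout does not pin down either choice, so treat this as one reasonable reading:

```python
def grow(field):
    """One grow step over a square list-of-lists of berry counts."""
    n = len(field)
    new_field = [row[:] for row in field]   # compute from pre-grow values
    for r in range(n):
        for c in range(n):
            if 1 <= field[r][c] < 10:
                new_field[r][c] += 1
            elif field[r][c] == 0:
                # spread: an empty space next to a full (10-berry) space gains 1
                neighbors = [(r + dr, c + dc)
                             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                             if (dr, dc) != (0, 0)]
                if any(0 <= i < n and 0 <= j < n and field[i][j] == 10
                       for i, j in neighbors):
                    new_field[r][c] = 1
    return new_field

print(grow([[10, 0], [0, 5]]))   # [[10, 1], [1, 6]]
```

In your own class this would naturally become a `grow` method on BerryField rather than a free function.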
### Bear
Each Bear has a location and a direction in which they are walking. Bears are also very hungry. In your program, you must manage 2 lists of bears. The first list contains those bears that are currently walking in the field. The second is a queue of bears waiting to enter the field.
- The initializer must, minimally, be able to take in a row and column location and a direction of travel.
- The string function must, minimally, be able to print out the location and direction of travel for the bear, and whether the bear is asleep.
- Bears can walk `North (N)`, `South (S)`, `East (E)`, `West (W)`, `NorthEast (NE)`, `NorthWest (NW)`, `SouthEast (SE)`, or `SouthWest (SW)`. Once a bear starts walking in a direction, it never turns.
- Bears are always hungry. Every turn, unless there is a tourist on the same spot, the bear eats all the berries available on the space and then moves in its current direction to the next space. This continues during the current turn until the bear eats 30 berries or runs into a tourist.
- For the special case of a bear and a tourist being in the same place during a turn, the bear does not eat any berries, but the tourist mysteriously disappears and the bear falls asleep for three turns.
- Once a bear reaches the boundary of the field (its row or column becomes -1 or N), it is no longer walking in the field and need not be considered any longer.
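The eight walking directions map naturally to `(row, column)` deltas; a sketch assuming row 0 is the north edge, so `S` increases the row:

```python
# (row, col) deltas for each direction, assuming row 0 is the north edge
DIRECTIONS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1),
              "NE": (-1, 1), "NW": (-1, -1), "SE": (1, 1), "SW": (1, -1)}

def step(row, col, direction):
    """One move in the bear's fixed direction of travel."""
    dr, dc = DIRECTIONS[direction]
    return row + dr, col + dc

def off_field(row, col, n):
    # a bear leaves the field when its row or column reaches -1 or N
    return row in (-1, n) or col in (-1, n)

print(step(2, 3, "NE"))      # (1, 4)
print(off_field(-1, 2, 5))   # True
```

Keeping the deltas in one dictionary means the Bear class never needs an eight-way if/elif chain.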
### Tourist
Each Tourist has a location. Just like with bears, you must maintain a list of tourists currently in the field and a queue of tourists waiting to enter the field.
- The initializer must, minimally, be able to take in a row and column location.
- Tourists see a bear if the bear is within 4 of their current position.
- The string function must, minimally, be able to print out the location of the tourist and how many turns have passed since they last saw a bear.
- Tourists stand and watch. They do not move, but they will leave the field if:
1. Three turns pass without them seeing a bear; they get bored and go home.
2. They can see three bears at the same time; they get scared and go home.
3. A bear runs into them; they mysteriously disappear and can no longer be found in the field.
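The "sees a bear" check can be a one-liner. This sketch assumes "within 4" means Chebyshev distance (the larger of the row and column differences); the handout doesn't pin down the metric, so adjust if your course uses Euclidean distance instead:

```python
def can_see(tourist, bear):
    """True if the bear is within 4 of the tourist's position (assumed Chebyshev)."""
    (tr, tc), (br, bc) = tourist, bear
    return max(abs(tr - br), abs(tc - bc)) <= 4

print(can_see((0, 0), (4, 4)))   # True
print(can_see((0, 0), (5, 0)))   # False
```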
## Execution
Remember to get `hw8_files_F19.zip` from the Course Materials section of Submitty. It has two sample input files and the expected output for your program.
For this homework, all of the data required to initialize your classes and program can be found in JSON files. Each of your 3 parts should start by asking for the name of the JSON file, reading the file, and then creating the objects you need based on the data read. The code below will help you with this.
```python
f = open("bears_and_berries_1.json")
data = json.loads(f.read())
print(data["berry_field"])
print(data["active_bears"])
print(data["reserve_bears"])
print(data["active_tourists"])
print(data["reserve_tourists"])
```
You will see that the field is a list of lists where each `[row][column]` value is the number of berries at that location; the `"active_bears"` and `"reserve_bears"` entries are lists of three-tuples `(row, column, direction)` defining the bears; and the `"active_tourists"` and `"reserve_tourists"` entries are lists of two-tuples `(row, column)` defining the tourists.
## Part 1
In Part 1, read the JSON file, create your objects, and then simply report on the initial state of the simulation by printing out the berry field, active bears, and active tourists. Name your program `hw8_part1.py` and submit it along with the three classes you developed.
## Part 2
In Part 2, start off the same way: read the JSON file, create your objects, and again print out the initial state of the simulation. Then run five turns of the simulation by:
- Growing the berries
- Moving the bears
- Checking on the tourists
- Printing out the state of the simulation
Do not worry about the reserve bears or reserve tourists entering the field, but report on any tourists or bears that leave. Name your program `hw8_part2.py` and submit it along with the three classes you developed.
## Part 3
In Part 3, do everything you did in Part 2, but make the following changes:
- After checking on the tourists, if there are still bears in the reserve queue and at least 500 berries, add the next reserve bear to the active bears.
- Then, if there are still tourists in the reserve queue and at least 1 active bear, add the next reserve tourist to the field.
- Instead of stopping after 5 turns, run until there are no more bears on the field and no more bears in the reserve list, or until there are no more bears on the field and no more berries.
- Finally, instead of reporting status every turn, report it every 5 turns and then again when the simulation ends.
As you go, report on any tourists or bears that leave or enter the field. Name your program `hw8_part3.py` and submit it along with the three classes you developed.
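The stopping rule above condenses to one boolean; a sketch with hypothetical names (`active_bears`, `reserve_bears`, and `total_berries` are mine, not from the handout):

```python
def simulation_over(active_bears, reserve_bears, total_berries):
    # stop when there are no bears on the field AND either
    # no bears remain in reserve or no berries remain
    return len(active_bears) == 0 and (len(reserve_bears) == 0 or total_berries == 0)

print(simulation_over([], [], 120))        # True: no bears anywhere
print(simulation_over([], ["bear"], 0))    # True: no field bears, no berries
print(simulation_over([], ["bear"], 50))   # False: a reserve bear could still enter
```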
## Supporting Files
{{< link href="HW8.zip" content="HW8.zip" title="Download HW8.zip" download="HW8.zip" card=true >}}
## Solution
> [!NOTE]
> I didn't get full marks on this assignment, so I didn't post the solution. I may redo it to produce a full-marks solution, and will add it here afterward.

---
title: CSCI 1200 - Homework 1 - Spotify Playlists
subtitle:
date: 2025-02-15T13:38:46-05:00
lastmod: 2025-02-15T13:38:46-05:00
slug: csci-1200-hw-1
draft: false
author:
name: James
link: https://www.jamesflare.com
email:
avatar: /site-logo.avif
description: This blog post provides a detailed guide on developing a music playlist management program similar to Spotify using C++. It covers command-line parameter handling, file I/O operations, and the use of STL string and vector classes.
keywords: ["C++", "Programming", "Homework", "STL Vector","Playlist Management"]
license:
comment: true
weight: 0
tags:
- CSCI 1200
- Homework
- RPI
- C++
- Programming
categories:
- Programming
collections:
- CSCI 1200
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This blog post provides a detailed guide on developing a music playlist management program similar to Spotify using C++. It covers command-line parameter handling, file I/O operations, and the use of STL string and vector classes.
resources:
- name: featured-image
src: featured-image.jpg
- name: featured-image-preview
src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Assignment Requirements
{{< details >}}
Before starting this homework, make sure you have read and understood the Academic Integrity Policy.
In this assignment you will develop a program to manage music playlists like Spotify does, let's call this program New York Playlists. Please read the entire handout before starting to code the assignment.
## Learning Objectives
- Practice handling command line arguments.
- Practice handling file input and output.
- Practice the C++ Standard Template Library string and vector classes.
## Command Line Arguments
Your program will be run like this:
```console
./nyplaylists.exe playlist.txt actions.txt output.txt
```
Here:
- nyplaylists.exe is the executable file name.
- playlist.txt is the name of an input file which contains a playlist - in this README, we will refer to this file as the **playlist file**.
- actions.txt is an input file which defines a sequence of actions - in this README, we will refer to this file as the **actions file**.
- output.txt is the name of the output file your program should print to.
## Playlist File Format and Output File Format
The playlist file and the output file have the same format. Take playlist_tiny1.txt as an example; this file has the following 4 lines:
```console
"Perfect Duet" Ed Sheeran, Beyonce
"Always Remember Us This Way" Lady Gaga current
"Million Reasons" Lady Gaga
"I Will Never Love Again - Film Version" Lady Gaga, Bradley Cooper
```
Except for the second line, each line has two fields: the music title and the artist(s). There is one single space separating these two fields.
The second line is special: it ends with the word **current**, meaning that the song "Always Remember Us This Way" is the currently playing song. The word **current** appears in the **playlist file** once and should also appear in the output file once.
## Actions File Format
The actions file defines a sequence of actions. Take actions1.txt as an example; this file has the following lines:
```console
add "Umbrella" Rihanna
add "We Are Young" Fun
add "You Are Still the One" Shania Twain
remove "Million Reasons" Lady Gaga
add "Viva La Vida" Coldplay
move "I Will Never Love Again - Film Version" Lady Gaga, Bradley Cooper 1
next
next
next
previous
move "You Are Still the One" Shania Twain 4
```
The **actions file** may include 5 different types of actions:
- add, which adds a song to the end of the playlist.
- remove, which removes a song from the playlist.
- move, which moves a song to a new position - the new position is always included at the end of the line. The line *move "I Will Never Love Again - Film Version" Lady Gaga, Bradley Cooper 1* moves the song "I Will Never Love Again - Film Version" to position 1, and the line *move "You Are Still the One" Shania Twain 4* moves the song "You Are Still the One" to position 4. Note that, unlike array indexing in C/C++, positioning in Spotify starts at 1, as opposed to 0. This can be seen in the Spotify screenshot above: the first position is position 1.
- next, which skips the currently playing song and starts playing the song listed directly after it. Note that if the currently playing song is already at the bottom of the playlist, the action *next* makes the first song (i.e., the song at the very top of the playlist) the currently playing song.
- previous, which skips the currently playing song and goes to the song listed directly before it. Note that if the currently playing song is already at the top of the playlist, the action *previous* makes the last song (i.e., the song at the bottom of the playlist) the currently playing song.
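The wraparound in *next* and *previous* is just modular index arithmetic; sketched here in Python for brevity (the assignment itself is in C++):

```python
n = 7                         # number of songs in the playlist (0-based indices)

def next_index(current):
    # next: the last song wraps to the first
    return (current + 1) % n

def prev_index(current):
    # previous: the first song wraps to the last;
    # Python's % is already non-negative, in C++ write (current - 1 + n) % n
    return (current - 1) % n

print(next_index(6), prev_index(0))   # 0 6
```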
According to this sample **actions file**, 4 songs will be added to the playlist, 1 song will be removed, and 2 songs will be moved. The currently playing song will also change to a song other than "Always Remember Us This Way".
When playlist_tiny1.txt and actions1.txt are supplied to your program as the two input files, your program should produce the following output file:
```console
"I Will Never Love Again - Film Version" Lady Gaga, Bradley Cooper
"Perfect Duet" Ed Sheeran, Beyonce
"Always Remember Us This Way" Lady Gaga
"You Are Still the One" Shania Twain
"Umbrella" Rihanna
"We Are Young" Fun current
"Viva La Vida" Coldplay
```
## Non-existent Songs
If a move action or a remove action in the **actions file** attempts to move or remove a song that does not exist in the playlist, your program should ignore that action.
## Duplicated Songs
In cases where the same song appears more than once in the playlist, choose the first occurrence (to move or remove) - i.e., search the playlist from top to bottom, identify the first occurrence of the song, and use it.
## Instructor's Code
You can test (but not view) the instructor's code here: [instructor code](http://ds.cs.rpi.edu/hws/playlists/). Note that this site is hosted on RPI's network, so you can visit it only if you are on RPI's network: either on campus or through a VPN. Also note that it is not your job in this assignment to play music; the instructor's C++ code here is just used as the backend to manage the playlist.
## Program Requirements & Submission Details
In this assignment, you are required to use both std::string and std::vector. You are NOT allowed to use any data structures we have not learned so far.
Use good coding style when you design and implement your program. Organize your program into functions: don't put all the code in main! Be sure to read the [Homework Policies](https://www.cs.rpi.edu/academics/courses/spring25/csci1200/homework_policies.php) as you put the finishing touches on your solution. Be sure to make up new test cases to fully debug your program, and don't forget to comment your code! Complete the provided template [README.txt](./README.txt). You must do this assignment on your own, as described in the [Collaboration Policy & Academic Integrity](https://www.cs.rpi.edu/academics/courses/spring25/csci1200/academic_integrity.php) page. If you did discuss the problem or error messages, etc. with anyone, please list their names in your README.txt file. Prepare and submit your assignment as instructed on the course webpage. Please ask a TA if you need help preparing your assignment for submission.
**Due Date**: 01/16/2025, 10pm.
## Rubric
13 pts
- README.txt Completed (3 pts)
- One of name, collaborators, or hours not filled in. (-1)
- Two or more of name, collaborators, or hours not filled in. (-2)
- No reflection. (-1)
- STL Vector & String (3 pts)
- Uses data structures which have not been covered in this class. (-3)
- Did not use STL vector (-2)
- Did not use STL string (-2)
- Program Structure (7 pts)
- No credit (significantly incomplete implementation) (-7)
- Putting almost everything in the main function. It's better to create separate functions for different tasks. (-2)
- Improper uses or omissions of const and reference. (-1)
- Almost total lack of helpful comments. (-4)
- Too few comments. (-2)
- Contains useless comments like commented-out code, terminal commands, or silly notes. (-1)
- Overly cramped, excessive whitespace, or poor indentation. (-1)
- Lacks error checking (num of args, invalid file names, invalid command, etc.) (-1)
- Poor choice of variable names: non-descriptive names (e.g. 'vec', 'str', 'var'), single-letter variable names (except single loop counter), etc. (-2)
- Uses global variables. (-1)
- Overly long lines, in excess of 100 or so characters. It's recommended to keep all lines short and put comments on their own lines. (-1)
{{< /details >}}
## Supporting Files
{{< link href="spotify_playlists.7z" content="spotify_playlists.7z" title="Download spotify_playlists.7z" download="spotify_playlists.7z" card=true >}}
## Program Design
Before starting, we need to figure out what needs to be done. Let's draw a flowchart to examine the steps.
```mermaid
flowchart TB
    A(("Start")) --> D["Read 'playlist file', 'actions file'"]
    subgraph "Initialize"
        D --> E["Find the index of current song (if any)"]
    end
    E --> F{"For each action in 'actions file'"}
    subgraph "Process Actions"
        F -- next --> G["Find the index of current song (if any)"]
        G --> H["Remove 'current' from current song"]
        H --> I{"Is it the last song?"}
        I -- Yes --> J["Set index to 0"]
        I -- No --> K["Set index to index+1"]
        J --> L["Mark new current song"]
        K --> L
        F -- previous --> M["Find the index of current song (if any)"]
        M --> N["Remove 'current' from current song"]
        N --> O{"Is it the first song?"}
        O -- Yes --> P["Set index to last song"]
        O -- No --> Q["Set index to index-1"]
        P --> R["Mark new current song"]
        Q --> R
        F -- add --> S["'Build' the new song string"]
        S --> T["Append to playlist"]
        F -- remove --> U["'Build' the song string to remove"]
        U --> V["Find the first occurrence (if any)"]
        V --> W["Remove from playlist (ignore if not found)"]
        F -- move --> X["'Build' the song string to move"]
        X --> Y["Check 'move' destination"]
        Y --> Z["Find the first occurrence (if any)"]
        Z --> ZA["Remove from playlist (ignore if not found)"]
        ZA --> ZB["Insert at new position"]
    end
```
Then we can plan which functions to use in this program.
```mermaid
flowchart TB
    subgraph "Main"
        main["main()"]
    end
    subgraph "File IO"
        load_list("load_list()")
        get_text("get_text()")
        write_list("write_list()")
    end
    subgraph "Helpers"
        is_all_digits("is_all_digits()")
        tokenizer("tokenizer()")
        check_in_list("check_in_list()")
        remove_in_list("remove_in_list()")
        get_current("get_current()")
        build_song("build_song()")
    end
    %% Connections
    main --> load_list
    load_list --> get_text
    main --> write_list
    main --> is_all_digits
    main --> tokenizer
    main --> check_in_list
    main --> remove_in_list
    main --> get_current
    main --> build_song
    remove_in_list --> check_in_list
```
## Pitfalls
1. It's tricky to parse each argument correctly. A song title can contain spaces, and the artist's name can too. Luckily, we don't need to care much about the middle part: the first token is always the command, and the rest is the song information to add or remove. I split each line into tokens by spaces, following the pattern `<action> <song> <location>`, and take each part as needed.
2. When moving or adding a song, it's possible that the matching line already ends with a `current` tag in the playlist file. If we only check the bare song name, some test cases will fail. For example, this is how I handle that case for the `move` command.
```diff
if (tokens[0] == "move") {
    if (is_all_digits(tokens.back())) {
        //set target position
        int dest = std::stoi(tokens.back());
        //build song from tokens
        std::string song;
        song = build_song(tokens, 1, tokens.size() - 1);
+       //fix song name if it has current tag
+       if (!check_in_list(song, playlist) &&
+           !check_in_list(song + " current", playlist)) {continue;}
+       else if (check_in_list(song + " current", playlist)) {
+           song += " current";
+       }
        remove_in_list(song, playlist);
        playlist.insert(playlist.begin() + dest - 1, song);
    } else {
        std::cout << "ERROR: Missing move destination" << std::endl;
        continue;
    }
}
```
I added another check for the song with ` current` appended, so the entry is matched correctly even when it is the currently playing song, before actually re-inserting it into the playlist.
## Solution
### nyplaylists.cpp
```cpp
//An implementation of CSCI-1200 HW1 Spotify Playlists
//Date: 2025/1/16
//Author: JamesFlare
#include <cctype>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

std::string get_text(const std::string &fname) {
    //load a text file into a string
    std::ifstream inFile(fname);
    //check if file exists
    if (!inFile) {
        std::cout << "Error: File not found" << std::endl;
        return "";
    }
    std::string text;
    std::string line;
    while (std::getline(inFile, line)) {
        text += line;
        text += "\n";
    }
    inFile.close();
    return text;
}

std::vector<std::string> load_list(const std::string &fname) {
    //load a text file into a vector of strings
    std::string text = get_text(fname);
    std::vector<std::string> lines;
    std::size_t start = 0;
    std::size_t end = 0;
    while ((end = text.find('\n', start)) != std::string::npos) {
        lines.push_back(text.substr(start, end - start));
        start = end + 1;
    }
    if (start < text.size()) {
        lines.push_back(text.substr(start));
    }
    return lines;
}

bool is_all_digits(const std::string &s) {
    //check if the string is a non-negative integer
    for (char c : s) {
        if (!std::isdigit(static_cast<unsigned char>(c))) {
            return false;
        }
    }
    return !s.empty();
}

std::vector<std::string> tokenizer(const std::string &s) {
    //split string into tokens on single spaces
    std::vector<std::string> tokens;
    std::string token;
    for (char c : s) {
        if (c == ' ') {
            tokens.push_back(token);
            token = "";
        } else {
            token += c;
        }
    }
    tokens.push_back(token);
    return tokens;
}

bool check_in_list(const std::string &s, const std::vector<std::string> &list) {
    //check if string is in list
    for (const std::string &item : list) {
        if (s == item) {
            return true;
        }
    }
    return false;
}

void remove_in_list(const std::string &s, std::vector<std::string> &list) {
    //remove the first occurrence of string from list (if any)
    if (!check_in_list(s, list)) { return; }
    for (std::size_t i = 0; i < list.size(); i++) {
        if (list[i] == s) {
            list.erase(list.begin() + i);
            return;
        }
    }
}

int get_current(const std::vector<std::string> &playlist) {
    //return the index of the entry marked with the word "current", or -1
    for (std::size_t i = 0; i < playlist.size(); i++) {
        if (playlist[i].find("current") != std::string::npos) {
            return static_cast<int>(i);
        }
    }
    return -1;
}

std::string build_song(const std::vector<std::string> &tokens, std::size_t start, std::size_t end) {
    //build a string from the tokens in the range [start, end)
    std::string song;
    for (std::size_t i = start; i < end; i++) {
        song += tokens[i];
        if (i != end - 1) {
            song += " ";
        }
    }
    return song;
}

void write_list(const std::string &fname, const std::vector<std::string> &list) {
    //write list to file, one entry per line
    std::ofstream outFile(fname);
    for (const std::string &line : list) {
        outFile << line << std::endl;
    }
    outFile.close();
}

int main(int argc, char *argv[]) {
    //take 3 arguments: playlist file, actions file, output file
    if (argc < 4) {
        std::cout << "Error: Not enough arguments" << std::endl;
        return 1;
    }
    //load arguments
    std::string playlist_fname = argv[1];
    std::string action_list_fname = argv[2];
    std::string output_fname = argv[3];
    //load working files
    std::vector<std::string> playlist = load_list(playlist_fname);
    std::vector<std::string> action_list = load_list(action_list_fname);
    //get current playing song id
    int current_song_id = get_current(playlist);
    //execute actions
    for (const std::string &command : action_list) {
        //split command into tokens
        std::vector<std::string> tokens = tokenizer(command);
        if (tokens[0] == "next") {
            current_song_id = get_current(playlist);
            //skip if nothing is marked current
            if (current_song_id == -1) { continue; }
            //remove the " current" tag (8 characters)
            playlist[current_song_id].erase(playlist[current_song_id].length() - 8);
            if (current_song_id == static_cast<int>(playlist.size()) - 1) {
                current_song_id = 0;
            } else {
                current_song_id++;
            }
            //update current song
            playlist[current_song_id] += " current";
        }
        if (tokens[0] == "previous") {
            current_song_id = get_current(playlist);
            //skip if nothing is marked current
            if (current_song_id == -1) { continue; }
            //remove the " current" tag (8 characters)
            playlist[current_song_id].erase(playlist[current_song_id].length() - 8);
            if (current_song_id == 0) {
                current_song_id = static_cast<int>(playlist.size()) - 1;
            } else {
                current_song_id--;
            }
            //update current song
            playlist[current_song_id] += " current";
        }
        if (tokens[0] == "add") {
            std::string song = build_song(tokens, 1, tokens.size());
            playlist.push_back(song);
        }
        if (tokens[0] == "remove") {
            std::string song = build_song(tokens, 1, tokens.size());
            remove_in_list(song, playlist);
        }
        if (tokens[0] == "move") {
            if (is_all_digits(tokens.back())) {
                //set target position (1-indexed)
                int dest = std::stoi(tokens.back());
                //build song from tokens
                std::string song = build_song(tokens, 1, tokens.size() - 1);
                //fix song name if it carries the current tag
                if (!check_in_list(song, playlist) &&
                    !check_in_list(song + " current", playlist)) { continue; }
                else if (check_in_list(song + " current", playlist)) {
                    song += " current";
                }
                remove_in_list(song, playlist);
                playlist.insert(playlist.begin() + dest - 1, song);
            } else {
                std::cout << "ERROR: Missing move destination" << std::endl;
                continue;
            }
        }
    }
    //write back file
    write_list(output_fname, playlist);
    return 0;
}
```

@@ -1,6 +1,6 @@
---
slug: "excalidraw-full-stack-docker"
title: "Excalidraw Full-Stack Self-Deployment"
title: "Deploying a Full-stack Excalidraw Using Docker"
subtitle: ""
date: 2023-01-13T15:54:36+08:00
lastmod: 2024-03-11T12:39:36-05:00
@@ -23,7 +23,8 @@ tags:
categories:
- Tutorials
- Sharing
collections:
- Docker Compose
hiddenFromHomePage: false
hiddenFromSearch: false
@@ -43,8 +44,8 @@ seo:
images: []
repost:
enable: true
url: ""
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/theme-documentation-content/#front-matter
---


@@ -0,0 +1,113 @@
---
title: Fix the Issue of Flarum Emails Not Being Sent Due to Queue.
subtitle:
date: 2024-10-25T12:30:55-04:00
slug: flarum-queue
draft: false
author:
  name: James
  link: https://www.jamesflare.com
  email:
  avatar: /site-logo.avif
description: This blog post addresses an issue with Flarum's email delivery, caused by improper Queue handling. It provides solutions using Docker commands and a Flarum plugin to ensure emails are sent correctly, especially when running Flarum in a Docker container.
keywords:
license:
comment: true
weight: 0
tags:
- Docker
- Flarum
- PHP
- Open Source
categories:
- Tutorials
- Sharing
collections:
- Docker Compose
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This blog post addresses an issue with Flarum's email delivery, caused by improper Queue handling. It provides solutions using Docker commands and a Flarum plugin to ensure emails are sent correctly, especially when running Flarum in a Docker container.
resources:
  - name: featured-image
    src: featured-image.jpg
  - name: featured-image-preview
    src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
  enable: false
  url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Introduction
Recently, while configuring Flarum, I encountered a peculiar issue where users were not receiving emails despite the Email SMTP configuration being correct. This included, but was not limited to, registration activation, password recovery, notifications, etc.
## Cause of the Issue
After searching through the sending logs, I discovered that this problem did not exist a few months ago. Reviewing my recent operations and community feedback, I narrowed the issue down to the Queue. In [Redis sessions, cache & queues](https://discuss.flarum.org/d/21873-redis-sessions-cache-queues), there is mention of the Queue. I overlooked this when initially using Redis.
## Solution
One approach is to execute `php flarum queue:work`, as suggested. However, this command runs as a long-lived foreground process, so a process supervisor would be needed to keep it running reliably. My Flarum instance runs in a Docker container, which makes this inconvenient. Nevertheless, we can run it once to see whether it resolves the email sending issue.
```bash
docker exec flarum /bin/sh -c "cd /flarum/app && php flarum schedule:run"
```
I observed that emails were sent correctly after execution, confirming that the issue was due to the Queue not running properly.
The second method, which I ultimately adopted, involves a small plugin provided by [Database Queue - the simplest queue, even for shared hosting](https://discuss.flarum.org/d/28151-database-queue-the-simplest-queue-even-for-shared-hosting). This plugin uses Cron tasks to handle the Queue, requiring only that Cron runs normally.
To install the plugin, since I am running in a Docker container, I adapted the command accordingly:
```bash
docker exec flarum /bin/sh -c "cd /flarum/app && composer require blomstra/database-queue:*"
```
`flarum` is the name of my container; you can modify it accordingly.
Then, restart Flarum and check if Cron has been correctly added. You should see something similar to:
```bash
root@debain:~# docker exec flarum /bin/sh -c "cd /flarum/app && php flarum schedule:list"
+-------------------------------------------------------+-----------+---------------------------------------------------------------------------------------------------------------------+----------------------------+
| Command | Interval | Description | Next Due |
+-------------------------------------------------------+-----------+---------------------------------------------------------------------------------------------------------------------+----------------------------+
| '/usr/bin/php8' 'flarum' drafts:publish | * * * * * | Publish all scheduled drafts. | 2024-10-25 17:00:00 +00:00 |
| '/usr/bin/php8' 'flarum' fof:best-answer:notify | 0 * * * * | After a configurable number of days, notifies OP of discussions with no post selected as best answer to select one. | 2024-10-25 17:00:00 +00:00 |
| '/usr/bin/php8' 'flarum' queue:work --stop-when-empty | * * * * * | | 2024-10-25 17:00:00 +00:00 |
+-------------------------------------------------------+-----------+---------------------------------------------------------------------------------------------------------------------+----------------------------+
```
`'/usr/bin/php8' 'flarum' queue:work --stop-when-empty` is what we expect, indicating no issues.
Remember to add Cron if you haven't already. You can refer to my example. First, enter crontab:
```bash
crontab -e
```
Add:
```bash
* * * * * /usr/bin/docker exec flarum /bin/sh -c "cd /flarum/app && php flarum schedule:run" >> /dev/null 2>&1
```
## Conclusion
Barring any unforeseen circumstances, you should have resolved the issue of emails not being sent. If emails still fail to send, it may be a configuration issue. Ensure the SMTP information is correct before starting and test it.
## References
- [Redis sessions, cache & queues](https://discuss.flarum.org/d/21873-redis-sessions-cache-queues)
- [Database Queue - the simplest queue, even for shared hosting](https://discuss.flarum.org/d/28151-database-queue-the-simplest-queue-even-for-shared-hosting)


@@ -0,0 +1,371 @@
---
title: Use Docker Compose to Deploy the LobeChat Server Database Version
subtitle:
date: 2024-09-15T04:52:21-04:00
slug: install-lobechat-db
draft: false
author:
  name: James
  link: https://www.jamesflare.com
  email:
  avatar: /site-logo.avif
description: This blog post offers a comprehensive guide on setting up LobeChat DB version, including configuring Logto for authentication, MinIO for S3 storage, and PostgreSQL for the database. It also covers customizing Logto's sign-in experience and enabling various models for LobeChat.
keywords: ["LobeChat", "Logto", "MinIO", "PostgreSQL", "Docker", "S3 Storage", "Authentication", "Database Configuration"]
license:
comment: true
weight: 0
tags:
- Open Source
- LobeChat
- Docker
categories:
- Tutorials
- Sharing
collections:
- Docker Compose
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
hiddenFromRelated: false
summary: This blog post offers a comprehensive guide on setting up LobeChat DB version, including configuring Logto for authentication, MinIO for S3 storage, and PostgreSQL for the database. It also covers customizing Logto's sign-in experience and enabling various models for LobeChat.
resources:
  - name: featured-image
    src: featured-image.jpg
  - name: featured-image-preview
    src: featured-image-preview.jpg
toc: true
math: false
lightgallery: false
password:
message:
repost:
  enable: false
  url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter
---
<!--more-->
## Introduction
By default, LobeChat uses IndexedDB to store user data, meaning the data is stored locally in the browser. Consequently, it becomes impossible to synchronize across multiple devices and poses a risk of data loss. Meanwhile, there is a server database version of LobeChat that addresses these issues and also allows for knowledge base functionality.
However, configuring the LobeChat DB version isn't straightforward. It involves several parts: setting up the database, configuring authentication services, and configuring S3 storage[^1].
[^1]: See official documentation https://lobehub.com/en/docs/self-hosting/server-database
## Configuring Logto
I recommend deploying the Logto service separately to potentially use it in other projects and manage them independently.
First, create a directory and enter it:
```bash
mkdir logto
cd logto
```
Here is my `docker-compose.yaml` file for reference. Modify the relevant parts according to your own setup.
```yaml
services:
  postgresql:
    image: postgres:16
    container_name: logto-postgres
    volumes:
      - './data:/var/lib/postgresql/data'
    environment:
      - 'POSTGRES_DB=logto'
      - 'POSTGRES_PASSWORD=logto'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5
    restart: always
  logto:
    image: svhd/logto:latest
    container_name: logto
    ports:
      - '127.0.0.1:3034:3034'
      - '127.0.0.1:3035:3035'
    depends_on:
      postgresql:
        condition: service_healthy
    environment:
      - 'PORT=3034'
      - 'ADMIN_PORT=3035'
      - 'TRUST_PROXY_HEADER=1'
      - 'DB_URL=postgresql://postgres:logto@postgresql:5432/logto'
      - 'ENDPOINT=https://logto.example.com'
      - 'ADMIN_ENDPOINT=https://logto-admin.example.com'
    entrypoint: ['sh', '-c', 'npm run cli db seed -- --swe && npm start']
```
After modifying, write the `docker-compose.yaml` file. Then start the container:
```bash
docker compose up -d
```
> [!WARNING]
> Don't forget to set `X-Forwarded-Proto` header in Nginx!
The reverse proxy must support HTTPS, because all of Logto's APIs must run in a secure environment; otherwise errors will occur[^2]. Additionally, HTTPS alone isn't enough: you also need to set the `X-Forwarded-Proto` header to `https` to inform Logto that users are accessing it over HTTPS. I use Nginx as my reverse proxy and provide a reference configuration below (modify it for your situation).
[^2]: Discussion on errors https://github.com/logto-io/logto/issues/4279
```nginx
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:3034;
    proxy_redirect off;
}
```
```
If you are writing the Nginx configuration file by hand rather than using a graphical tool like Nginx Proxy Manager, you need to fill in the remaining parts yourself (don't copy this verbatim). Conversely, if you use Nginx Proxy Manager, you can adjust the `proxy_pass` line and paste the snippet into the Advanced settings of the corresponding proxy host.
Afterwards, you can access the `ADMIN_ENDPOINT` to complete registration and configuration (the first registered account automatically becomes an admin). Remember to add an Application in preparation for the LobeChat DB deployment, selecting Next.js (App Router) as its type. Several key parameters must be entered exactly (replace the domain names with your own LobeChat DB instance):
- Set `Redirect URIs` to `https://lobe.example.com/api/auth/callback/logto`
- Set `Post sign-out redirect URIs` to `https://lobe.example.com/`
- Set `CORS allowed origins` to `https://lobe.example.com`
There are three values we will need when configuring the LobeChat DB version: the Issuer endpoint, App ID, and App secrets (add one). Note them down.
You can also visit the `/demo-app` path of your user ENDPOINT to test login and registration functions. If everything is fine, then Logto should be properly configured, allowing you to proceed with further steps.
## Configuring MinIO
I recommend deploying MinIO separately for potential use in other projects as well.
Create a directory and enter it:
```bash
mkdir minio
cd minio
```
Here is my `docker-compose.yaml` file for reference:
```yaml
services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    restart: unless-stopped
    environment:
      - MINIO_DOMAIN=minio.example.com
      - MINIO_SERVER_URL=https://minio.example.com
      - MINIO_BROWSER_REDIRECT_URL=https://console.minio.example.com
      - MINIO_ROOT_USER=xxxx #change it
      - MINIO_ROOT_PASSWORD=xxxxx #change it
    ports:
      - "9000:9000"
      - "9090:9090"
    volumes:
      - ./data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    command: server /data --console-address ":9090"
```
After modifying, write the `docker-compose.yaml` file. Then start the container:
```bash
docker compose up -d
```
Subsequently, log into your MinIO instance at your `MINIO_BROWSER_REDIRECT_URL`, create a Bucket (e.g., name it `lobe`; if you choose another name, remember to update the corresponding configuration), and configure an Access Policy similar to the following JSON:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["*"]
      },
      "Action": ["s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::lobe"]
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["*"]
      },
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::lobe"],
      "Condition": {
        "StringEquals": {
          "s3:prefix": ["files/*"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["*"]
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::lobe/files/**"]
    }
  ]
}
```
Then go to Access Keys and create a token, save these values as they will be used in the LobeChat DB version configuration.
## Configuring LobeChat DB Version
Now we start configuring the LobeChat DB version. First, create a directory and enter it:
```bash
mkdir lobe-db
cd lobe-db
```
Here is my `docker-compose.yaml` file for reference; remember to modify according to your setup:
```yaml
services:
  postgresql:
    image: pgvector/pgvector:pg16
    container_name: lobe-postgres
    volumes:
      - './data:/var/lib/postgresql/data'
    environment:
      - 'POSTGRES_DB=lobe-db'
      - 'POSTGRES_PASSWORD=lobe-db'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5
    restart: always
  lobe:
    image: lobehub/lobe-chat-database
    container_name: lobe-database
    ports:
      - 127.0.0.1:3033:3210
    depends_on:
      postgresql:
        condition: service_healthy
    environment:
      - 'APP_URL=https://lobe-db.example.com'
      - 'NEXT_AUTH_SSO_PROVIDERS=logto'
      - 'KEY_VAULTS_SECRET=NIdSgLKmeFhWmTuQKQYzn99oYk64aY0JTSssZuiWR8A=' #generate using `openssl rand -base64 32`
      - 'NEXT_AUTH_SECRET=+IHNVxT2qZpA8J+vnvuwA5Daqz4UFFJOahK6z/GsNIo=' #generate using `openssl rand -base64 32`
      - 'NEXTAUTH_URL=https://lobe.example.com/api/auth'
      - 'LOGTO_ISSUER=https://logto.example.com/oidc' #Issuer endpoint
      - 'LOGTO_CLIENT_ID=xxxx' #App ID
      - 'LOGTO_CLIENT_SECRET=xxxx' #App secrets
      - 'DATABASE_URL=postgresql://postgres:lobe-db@postgresql:5432/lobe-db'
      - 'POSTGRES_PASSWORD=lobe-db'
      - 'LOBE_DB_NAME=lobe-db'
      - 'S3_ENDPOINT=https://minio.example.com'
      - 'S3_BUCKET=lobe'
      - 'S3_PUBLIC_DOMAIN=https://minio.example.com'
      - 'S3_ACCESS_KEY_ID=xxxxx'
      - 'S3_SECRET_ACCESS_KEY=xxxxxx'
      - 'S3_ENABLE_PATH_STYLE=1'
      - 'OPENAI_API_KEY=sk-xxxxxx' #your OpenAI API Key
      - 'OPENAI_PROXY_URL=https://api.openai.com/v1'
      - 'OPENAI_MODEL_LIST=-all,+gpt-4o,+gpt-4o-mini,+claude-3-5-sonnet-20240620,+deepseek-chat,+o1-preview,+o1-mini' #change on your own needs, see https://lobehub.com/zh/docs/self-hosting/environment-variables/model-provider#openai-model-list
    restart: always
```
For security reasons, `KEY_VAULTS_SECRET` and `NEXT_AUTH_SECRET` should each be a strong random secret. You can generate one using the command `openssl rand -base64 32`.
Then modify domain names in environment variables to your own setup. Additionally, several Logto values need to be set:
- `Issuer endpoint` corresponds to `LOGTO_ISSUER`
- `App ID` corresponds to `LOGTO_CLIENT_ID`
- `App secrets` corresponds to `LOGTO_CLIENT_SECRET`
These can all be found on the Application page you created.
For S3 configuration, also modify accordingly (e.g., `S3_ENDPOINT`, `S3_BUCKET`, `S3_PUBLIC_DOMAIN`, `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`). As for `S3_ENABLE_PATH_STYLE`, it is usually set to `1`. If your S3 provider uses virtual-host style, change this value to `0`.
{{< admonition type=question title="What are the differences between path-style and virtual-host?" open=true >}}
Path-style and virtual-host are different ways of accessing buckets and objects in S3. The URL structure and domain name resolution differ:
Assuming your S3 provider's domain is s3.example.net, bucket is mybucket, object is config.env, the specific differences are as follows:
- Path-style: `s3.example.net/mybucket/config.env`
- Virtual-host: `mybucket.s3.example.net/config.env`
{{< /admonition >}}
Finally, configure your API-related content (optional). My configuration example uses OpenAI. If you do not set it up on the server side, users will need to enter their own keys in the frontend.
After modifying, write the `docker-compose.yaml` file and start the container:
```bash
docker compose up -d
```
In theory, you can now access LobeChat DB version. Before deploying to production, carefully check for any security issues. If you have questions, feel free to comment.
## Additional Content
### Customizing Logto Login/Registration Options
On the Logto management page, there is a Sign-in experience section with various customization options such as enabling or disabling registration and using social media SSO. By default, the Sign-up identifier is Username; I recommend configuring SMTP in Connectors to change it to Email address so users can recover their passwords via email.
### Enabling Dark Mode for Logto Login/Registration Pages
On the Logto management page, under Sign-in experience, check Enable dark mode to turn on dark mode.
### Adding GitHub Login/Registration Options in Logto
In the Logto management page, go to Connectors and add GitHub under Social connectors. Other options are similar.
### Configuring Additional Models
LobeChat supports many models; you can set different environment variables to enable them. See the official documentation for `OPENAI_MODEL_LIST` configuration options and explanations [here](https://lobehub.com/en/docs/self-hosting/environment-variables/model-provider). There are also other model provider options like DeepSeek.
On the frontend, you can fetch the model list via the API and select the models you need.
## References
- [Deploying Server-Side Database for LobeChat](https://lobehub.com/en/docs/self-hosting/server-database)
- [bug: use docker deploy logto v1.6 will always redirect to /unknown-session #4279](https://github.com/logto-io/logto/issues/4279)
- [Deployment | Logto docs #reverse-proxy](https://docs.logto.io/docs/recipes/deployment/#reverse-proxy)
- [Deploying LobeChat Server Database with Docker Compose](https://lobehub.com/en/docs/self-hosting/server-database/docker-compose)
- [LobeChat Model Service Providers - Environment Variables and Configuration #openai-model-list](https://lobehub.com/en/docs/self-hosting/environment-variables/model-provider#openai-model-list)


@@ -1,5 +1,5 @@
---
title: Migrate Umami Docker From One Server to Another
title: Migrate the Docker-deployed Umami From One Server to Another.
subtitle:
date: 2024-03-11T18:03:39-04:00
slug: umami-docker-migration
@@ -15,12 +15,14 @@ license:
comment: true
weight: 0
tags:
- PostgreSQL
- Open Source
- Docker
- Umami
- PostgreSQL
- Open Source
- Docker
- Umami
categories:
- Tutorials
- Tutorials
collections:
- Docker Compose
hiddenFromHomePage: false
hiddenFromSearch: false
hiddenFromRss: false
@@ -37,7 +39,7 @@ lightgallery: false
password:
message:
repost:
enable: true
enable: false
url:
# See details front matter: https://fixit.lruihao.cn/documentation/content-management/introduction/#front-matter

