Commit 5705ddbe authored by Sibidharan

Initial commit

.DS_Store 0 → 100644
File added
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff:
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/dictionaries
# Sensitive or high-churn files:
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.xml
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
# Gradle:
.idea/**/gradle.xml
.idea/**/libraries
# CMake
cmake-build-debug/
# Mongo Explorer plugin:
.idea/**/mongoSettings.xml
## File-based project format:
*.iws
## Plugin-specific files:
# IntelliJ
/out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
# Pycache
__pycache__/
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$" />
    <orderEntry type="jdk" jdkName="Python 3.7" jdkType="Python SDK" />
    <orderEntry type="sourceFolder" forTests="false" />
    <orderEntry type="module" module-name="handwritten" />
  </component>
</module>
\ No newline at end of file
<component name="InspectionProjectProfileManager">
  <settings>
    <option name="USE_PROJECT_PROFILE" value="false" />
    <version value="1.0" />
  </settings>
</component>
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="JavaScriptSettings">
    <option name="languageLevel" value="ES6" />
  </component>
  <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.7" project-jdk-type="Python SDK" />
</project>
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectModuleManager">
    <modules>
      <module fileurl="file://$PROJECT_DIR$/.idea/handwriting-generation.iml" filepath="$PROJECT_DIR$/.idea/handwriting-generation.iml" />
      <module fileurl="file://$PROJECT_DIR$/../handwritten/.idea/handwritten.iml" filepath="$PROJECT_DIR$/../handwritten/.idea/handwritten.iml" />
    </modules>
  </component>
</project>
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="VcsDirectoryMappings">
    <mapping directory="" vcs="Git" />
  </component>
</project>
\ No newline at end of file
File added
Problem Definition: Write an 8051 program to transfer a block from internal memory location 30h to internal memory location 40h.
Aim: To understand the concepts of internal memory organization, the various addressing modes, and external memory access.
Hardware and Software requirements: IBM PC, Keil software
Algorithm:
1. Start.
2. Load the count value into a register.
3. Point R0 to internal memory location 30h.
4. Point R1 to internal memory location 40h.
5. Copy the contents of the memory pointed to by R0 into the accumulator.
6. Copy the contents of the accumulator to the memory pointed to by R1.
7. Increment R0.
8. Increment R1.
9. Decrement the counter.
10. If the counter is not zero, go to step 5.
11. Stop.
Conclusion:
We have studied how to access internal memory using the indirect addressing mode.
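A minimal 8051 assembly sketch of the algorithm above (the block length of 05h and the choice of R2 as the counter are assumed examples):
        MOV  R2, #05H     ; step 2: load an assumed count of 5 bytes
        MOV  R0, #30H     ; step 3: point R0 to the source block at 30h
        MOV  R1, #40H     ; step 4: point R1 to the destination block at 40h
LOOP:   MOV  A, @R0       ; step 5: indirect read into the accumulator
        MOV  @R1, A       ; step 6: indirect write to the destination
        INC  R0           ; step 7
        INC  R1           ; step 8
        DJNZ R2, LOOP     ; steps 9 and 10: decrement counter, repeat until zero
        SJMP $            ; step 11: stop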
# Text to handwriting!
### Get your text written on a ruled page automatically
**It's time for your laptop to write assignments for you!** \
**Click to see [Example input](https://github.com/pnshiralkar/text-to-handwriting/blob/master/Example/input.txt) and [Example output](https://github.com/pnshiralkar/text-to-handwriting/blob/master/Example/handwritten.pdf).**
An implementation of handwriting generation using recurrent neural networks in TensorFlow, based on the Alex Graves paper (https://arxiv.org/abs/1308.0850). \
This project uses a pretrained model and parts of the implementation from [this](https://github.com/theSage21/handwriting-generation) repo.
## Install and Use
* Download the zip or clone this repo, then cd into the repo folder
* Install dependencies: `pip install -r requirements.txt` or `pip3 install -r requirements.txt`
* **Run and Use:**
    * `python handwrite.py --text "Some text with minimum 50 characters" <optional arguments>`
    * `python handwrite.py --text-file /path/to/input/text.file <optional arguments>`
    * Optional arguments:
        * `--style`: Style of handwriting (0 to 7, defaults to 0)
        * `--bias`: Bias in handwriting; more bias gives more unclear handwriting (0.00 to 1.00, defaults to 0.9)
        * `--color`: Color of handwriting in RGB format, defaults to 0,0,150 (ballpoint blue)
        * `--output`: Path to the output pdf file (e.g. ~/assignments/ads1.pdf), defaults to ./handwritten.pdf
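* For example, a full invocation might look like `python handwrite.py --text-file notes.txt --style 3 --bias 0.8 --output ~/assignments/notes.pdf` (the file paths here are placeholders)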
* For more information on usage, run `python handwrite.py -h`
### Works best with multiple pages and long text!
## Additional Information
* **Additional outputs:** The `pages` folder stores the handwritten pages in .jpg and .png (transparent background) formats
* **Modification:** To modify the generation logic, see `generate.py`; a usage sketch follows this list
* **Train model:** To train the model, see `train.py` (refer to [this](https://github.com/theSage21/handwriting-generation) repo for more)
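A minimal sketch of driving `generate.generate()` directly, mirroring what `handwrite.py` does internally. The `Args` stand-in, the output name `out.pdf`, and the sample text are assumptions, not a published API; it also assumes you run from the repo root with `data/`, `pretrained/`, `blank_page.jpg`, and a `pages/` folder in place:

```python
import os
import pickle

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

import generate  # note: importing generate parses CLI arguments at import time

with open(os.path.join('data', 'translation.pkl'), 'rb') as f:
    translation = pickle.load(f)  # maps characters to model input ids


class Args:
    # Minimal stand-in for the argparse namespace that generate() reads
    style, bias, force, output = 0, 0.9, False, 'out.pdf'


with tf.Session() as sess:
    # Restore the pretrained TF1 graph and weights
    saver = tf.train.import_meta_graph(os.path.join('pretrained', 'model-29') + '.meta')
    saver.restore(sess, os.path.join('pretrained', 'model-29'))
    generate.generate("This sample sentence is comfortably longer than fifty characters.",
                      Args, sess, translation, text_color=[0, 0, 150])
```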
More Info
---------
[The paper](http://arxiv.org/abs/1308.0850) \
[The man behind it all: Alex Graves](http://www.cs.toronto.edu/~graves/) \
[What I am using](https://github.com/theSage21/handwriting-generation)
blank_page.jpg (191 KiB): File added
File added
File added
import argparse
import os
import pickle
from collections import namedtuple
from io import BytesIO

import matplotlib
import numpy as np
from PIL import Image
# The pretrained model is a TF1 checkpoint, so use the v1 compat layer
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()
matplotlib.use('agg')  # headless backend; pages are rendered off-screen
import matplotlib.pyplot as plt
parser = argparse.ArgumentParser()
parser.add_argument('--model', dest='model_path', type=str, default=os.path.join('pretrained', 'model-29'),
                    help='(optional) DL model to use')
parser.add_argument('--text', dest='text', type=str, help='Text to write')
parser.add_argument('--text-file', dest='file', type=str, default=None, help='Path to the input text file')
parser.add_argument('--style', dest='style', type=int, default=0, help='Style of handwriting (0 to 7)')
parser.add_argument('--bias', dest='bias', type=float, default=0.9,
                    help='Bias in handwriting. More bias is more unclear handwriting (0.00 to 1.00)')
parser.add_argument('--force', dest='force', action='store_true', default=False)
parser.add_argument('--color', dest='color_text', type=str, default='0,0,150',
                    help='Color of handwriting in RGB format')
parser.add_argument('--output', dest='output', type=str, default='./handwritten.pdf',
                    help='Output PDF file path and name')
args = parser.parse_args()
def sample(e, mu1, mu2, std1, std2, rho):
    # Draw the next pen offset from the chosen bivariate Gaussian mixture
    # component, and the end-of-stroke flag from a Bernoulli(e)
    cov = np.array([[std1 * std1, std1 * std2 * rho],
                    [std1 * std2 * rho, std2 * std2]])
    mean = np.array([mu1, mu2])
    x, y = np.random.multivariate_normal(mean, cov)
    end = np.random.binomial(1, e)
    return np.array([x, y, end])
def split_strokes(points):
    # Split a sequence of (x, y, end_flag) points into separate strokes
    # at each end-of-stroke flag
    points = np.array(points)
    strokes = []
    b = 0
    for e in range(len(points)):
        if points[e, 2] == 1.:
            strokes += [points[b:e + 1, :2].copy()]
            b = e + 1
    return strokes
def cumsum(points):
    # Convert relative pen offsets to absolute coordinates, keeping the flags
    sums = np.cumsum(points[:, :2], axis=0)
    return np.concatenate([sums, points[:, 2:]], axis=1)
def sample_text(sess, args_text, translation, bias, style=None):
    fields = ['coordinates', 'sequence', 'bias', 'e', 'pi', 'mu1', 'mu2', 'std1', 'std2',
              'rho', 'window', 'kappa', 'phi', 'finish', 'zero_states']
    vs = namedtuple('Params', fields)(
        *[tf.get_collection(name)[0] for name in fields]
    )
    text = np.array([translation.get(c, 0) for c in args_text])
    coord = np.array([0., 0., 1.])
    coords = [coord]

    # Prime the model with the author style if requested
    prime_len, style_len = 0, 0
    if style is not None:
        # Priming consists of prepending a real pen-position and character sequence
        # to the synthetic sequence to generate, while the synthetic pen positions
        # themselves are sampled from the MDN
        style_coords, style_text = style
        prime_len = len(style_coords)
        style_len = len(style_text)
        prime_coords = list(style_coords)
        coord = prime_coords[0]  # Set the first pen stroke as the first element to process
        text = np.r_[style_text, text]  # concatenate the prime text and the synthesis character sequence
        sequence_prime = np.eye(len(translation), dtype=np.float32)[style_text]
        sequence_prime = np.expand_dims(np.concatenate([sequence_prime, np.zeros((1, len(translation)))]), axis=0)

    sequence = np.eye(len(translation), dtype=np.float32)[text]
    sequence = np.expand_dims(np.concatenate([sequence, np.zeros((1, len(translation)))]), axis=0)

    phi_data, window_data, kappa_data, stroke_data = [], [], [], []
    sess.run(vs.zero_states)
    sequence_len = len(args_text) + style_len
    for s in range(1, 60 * sequence_len + 1):
        is_priming = s < prime_len
        e, pi, mu1, mu2, std1, std2, rho, \
            finish, phi, window, kappa = sess.run([vs.e, vs.pi, vs.mu1, vs.mu2,
                                                   vs.std1, vs.std2, vs.rho, vs.finish,
                                                   vs.phi, vs.window, vs.kappa],
                                                  feed_dict={
                                                      vs.coordinates: coord[None, None, ...],
                                                      vs.sequence: sequence_prime if is_priming else sequence,
                                                      vs.bias: bias
                                                  })
        if is_priming:
            # Use the real coordinate while priming
            coord = prime_coords[s]
        else:
            # Synthesis mode
            phi_data += [phi[0, :]]
            window_data += [window[0, :]]
            kappa_data += [kappa[0, :]]
            g = np.random.choice(np.arange(pi.shape[1]), p=pi[0])
            coord = sample(e[0, 0], mu1[0, g], mu2[0, g],
                           std1[0, g], std2[0, g], rho[0, g])
            coords += [coord]
            stroke_data += [[mu1[0, g], mu2[0, g], std1[0, g], std2[0, g], rho[0, g], coord[2]]]
            if not args.force and finish[0, 0] > 0.8:
                break

    coords = np.array(coords)
    coords[-1, 2] = 1.
    return phi_data, window_data, kappa_data, stroke_data, coords
def add_color(color, image_out):
    # Recolour the rendered strokes and make the white background transparent
    print("Applying color : ", color)
    img = Image.open(image_out)
    width, height = img.size
    for x in range(width):
        for y in range(height):
            old_color = list(img.getpixel((x, y)))
            if old_color != [255, 255, 255, 255]:
                new_color = [color[c] for c in range(3)]
                img.putpixel((x, y), tuple(new_color))
            else:
                new_color = [255, 255, 255, 0]
                img.putpixel((x, y), tuple(new_color))
    imgout = BytesIO()
    img.save(imgout, 'PNG')
    imgout.seek(0)
    return imgout
##################################################################
# The Generator Function #
##################################################################
def generate(args_text, args, sess, translation, text_color=[0, 0, 0]):
    # Load the requested author style (pen coordinates + text) for priming
    style = None
    if args.style is not None:
        with open(os.path.join('data', 'styles.pkl'), 'rb') as file:
            styles = pickle.load(file)
        if args.style >= len(styles[0]):
            raise ValueError('Requested style is not in style list')
        style = [styles[0][args.style], styles[1][args.style]]

    currentX = 0
    currentY = 0
    currentLen = 0
    line_length = 50  # characters per line
    line_height = -4
    text_remaining = len(args_text)
    lines_per_page = 28
    curr_page = 1
    curr_line = 1
    fig, ax = plt.subplots(1, 1)
    plt.figure(num=None,
               figsize=(115, 5 * min(lines_per_page, text_remaining // line_length + args_text.count('\n'))),
               dpi=35, facecolor='w', edgecolor='k')
    print('Writing...')
    for multiline_text in args_text.split(' '):
        for text_without_spaces in multiline_text.split('\n'):
            text = " {} ".format(text_without_spaces)
            phi_data, window_data, kappa_data, stroke_data, coords = sample_text(sess, text, translation,
                                                                                 args.bias, style)
            # Wrap to a new line when the current one is full or on an explicit newline
            if currentLen + len(text_without_spaces) > line_length or multiline_text.split('\n').index(
                    text_without_spaces) > 0:
                currentY += line_height
                currentX = 0
                currentLen = 0
                print('')
                curr_line += 1
            strokes = np.array(stroke_data)
            strokes[:, :2] = np.cumsum(strokes[:, :2], axis=0)
            maxx = np.max(strokes[:, 0])
            for stroke in split_strokes(cumsum(np.array(coords))):
                # Skip the trailing pen-lift stroke at the right edge of the word
                if np.min(stroke[:, 0]) > maxx - 2 and np.max(stroke[:, 0]) < maxx + 2:
                    continue
                plt.plot(stroke[:, 0] + currentX, -stroke[:, 1] + currentY)
            currentX += maxx - 2
            currentLen += len(text_without_spaces) + 1
            text_remaining -= (len(text_without_spaces) + 1)
            print(text, end=' ', flush=True)
            if curr_line >= lines_per_page:
                # Page full: render the figure, colour the strokes and paste onto the ruled page
                ax.set_aspect('equal')
                plt.axis('off')
                figfile = BytesIO()
                print("\n\nProcessing page No. {}...\nCreating image...".format(curr_page), flush=True)
                plt.savefig(figfile, format='png', bbox_inches='tight')
                figfile.seek(0)  # rewind to the beginning of the buffer
                print("Colouring text...", flush=True)
                figfile1 = add_color(text_color, figfile)
                print("Saving image...", flush=True)
                image_out = 'pages/page{}.png'.format(curr_page)
                with open(image_out, 'wb') as fl:
                    fl.write(figfile1.read())
                img = Image.open(image_out)
                img.load()
                img = img.resize((int(img.size[0] * 0.8), int(img.size[1] * 0.804)), Image.LANCZOS)
                background = Image.open('blank_page.jpg')
                background.load()
                background.paste(img, mask=img.split()[3], box=(30, 220))  # 3 is the alpha channel
                background.save(image_out.replace('.png', '.jpg'), 'JPEG', quality=100)
                print("\nPage No. {} done!\n\n".format(curr_page), flush=True)
                # Start a fresh figure for the next page
                fig, ax = plt.subplots(1, 1)
                plt.figure(num=None,
                           figsize=(115, 5 * min(lines_per_page,
                                                 text_remaining // line_length
                                                 + args_text[args_text.index(text_without_spaces):].count('\n'))),
                           dpi=40, facecolor='w', edgecolor='k')
                curr_page += 1
                currentX = 0
                currentY = 0
                currentLen = 0
                curr_line = 1

    # Render the final (possibly partial) page
    ax.set_aspect('equal')
    plt.axis('off')
    figfile = BytesIO()
    print("\n\nProcessing page No. {}...\nCreating image...".format(curr_page), flush=True)
    plt.savefig(figfile, format='png', bbox_inches='tight')
    figfile.seek(0)  # rewind to the beginning of the buffer
    print("Colouring text...", flush=True)
    figfile1 = add_color(text_color, figfile)
    print("Saving image...", flush=True)
    image_out = 'pages/page{}.png'.format(curr_page)
    with open(image_out, 'wb') as fl:
        fl.write(figfile1.read())
    img = Image.open(image_out)
    img.load()
    img = img.resize((int(img.size[0] * 0.8), int(img.size[1] * 0.804)), Image.LANCZOS)
    background = Image.open('blank_page.jpg')
    background.load()
    background.paste(img, mask=img.split()[3], box=(30, 315))  # 3 is the alpha channel
    background.save(image_out.replace('.png', '.jpg'), 'JPEG', quality=100)
    print("\nPage No. {} done!\n\n".format(curr_page), flush=True)

    # Assemble the page JPEGs into a single PDF
    print('\nGenerating PDF...', end='')
    img1 = Image.open('pages/page1.jpg')
    im_list = [Image.open('pages/page{}.jpg'.format(i)) for i in range(2, curr_page + 1)]
    img1.save(args.output, "PDF", resolution=100.0, save_all=True, append_images=im_list)
    print("done\n\nSuccessfully generated handwritten pdf from text at:\n{}".format(args.output))
    return args.output
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--model', dest='model_path', type=str, default=os.path.join('pretrained', 'model-29'),
                    help='(optional) DL model to use')
parser.add_argument('--text', dest='text', type=str, help='Text to write')
parser.add_argument('--text-file', dest='file', type=str, default=None, help='Path to the input text file')
parser.add_argument('--style', dest='style', type=int, default=0, help='Style of handwriting (0 to 7)')
parser.add_argument('--bias', dest='bias', type=float, default=0.9,
                    help='Bias in handwriting. More bias is more unclear handwriting (0.00 to 1.00)')
parser.add_argument('--force', dest='force', action='store_true', default=False)
parser.add_argument('--color', dest='color_text', type=str, default='0,0,150',
                    help='Color of handwriting in RGB format')
parser.add_argument('--output', dest='output', type=str, default='./handwritten.pdf',
                    help='Output PDF file path and name')
args = parser.parse_args()
if args.file:
    with open(args.file, 'r') as f:
        text = f.read()
else:
    text = args.text

if text is None:
    print("Please provide either --text or --text-file in arguments")
    exit()
if len(text) <= 50:
    print("Text too short!")
    exit()
import pickle

import matplotlib
# The pretrained model is a TF1 checkpoint, so use the v1 compat layer
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()
matplotlib.use('agg')

import generate
def main():
    with open(os.path.join('data', 'translation.pkl'), 'rb') as file:
        translation = pickle.load(file)
    rev_translation = {v: k for k, v in translation.items()}
    charset = [rev_translation[i] for i in range(len(rev_translation))]
    charset[0] = ''

    config = tf.ConfigProto(
        device_count={'GPU': 0}  # force CPU execution
    )
    with tf.Session(config=config) as sess:
        # Restore the pretrained graph and weights
        saver = tf.train.import_meta_graph(args.model_path + '.meta')
        saver.restore(sess, args.model_path)
        print("\n\nInitialization Complete!\n\n\n\n")
        color = [int(i) for i in args.color_text.replace(' ', '').split(',')]
        # Swap '1' for the lookalike 'I' before generation
        pdf = generate.generate(text.replace('1', 'I'), args, sess, translation, color[:3])


if __name__ == '__main__':
    main()
File added
File added
imgs/example-1.PNG (20.3 KiB): File added
imgs/example-2.gif (447 KiB): File added
imgs/loss-plot.PNG (38.7 KiB): File added