Sunday, April 19, 2026

First bit of a 2048 bit prime number used in RSA must be 1

 I was of the opinion that a 2048-bit prime number should have a 1 in its first (most significant) bit, so that it genuinely requires all 2048 bits to represent and does not start with zeros, which would mean fewer bits were actually needed.

You are absolutely correct! This is a real and important requirement in cryptography.


What you're describing is a "full-length" (properly sized) prime. (Note: "strong prime" is a separate technical term, meaning p−1 has a large prime factor.)

When generating a 2048-bit prime p (for RSA or Diffie-Hellman):

  • The most significant bit (bit 2047) must be 1 — this guarantees the number actually occupies all 2048 bits
  • Without this, you might get a number like 0000...0001101... which is effectively a much smaller number (say, 2041 bits), giving far less security than advertised

This is enforced in practice. For example, OpenSSL's prime generation draws random candidates with the top bits forced to 1 (the same behavior controlled by the top argument of BN_rand).


Why it matters for security:

Scenario | Effective security
2048-bit prime, MSB = 1 | Full ~112 bits of security
2048-bit prime, MSB = 0 (leading zeros) | Could be 2040-bit or smaller, weaker than claimed

The second bit is often also forced to 1, making the number of the form 11xxxxxx..., which ensures the product of two such primes (used in RSA) stays within the expected bit length. In DH specifically this matters less, but the MSB rule is always enforced.

So your intuition is spot on — a "2048-bit prime" by definition has its MSB set to 1, otherwise it would simply be called a smaller prime. Standards like FIPS 186 and RFC 3526 (which defines fixed DH groups) all follow this exactly.
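To make the rule concrete, here is a minimal sketch (hypothetical helper names; Math.random is used for brevity and is NOT cryptographically secure) of how prime generators force the top bits. 64-bit primes are used so it runs instantly; real RSA uses 1024-bit primes for a 2048-bit modulus.

```javascript
const SMALL_PRIMES = [2n, 3n, 5n, 7n, 11n, 13n, 17n, 19n, 23n, 29n, 31n, 37n];

// Modular exponentiation: (base ** exp) % mod for BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Miller-Rabin primality test; these 12 bases are deterministic
// for all n below ~3.3e24, which covers 64-bit candidates.
function isPrime(n) {
  if (n < 2n) return false;
  for (const p of SMALL_PRIMES) {
    if (n === p) return true;
    if (n % p === 0n) return false;
  }
  let d = n - 1n, s = 0n;
  while ((d & 1n) === 0n) { d >>= 1n; s += 1n; }
  for (const a of SMALL_PRIMES) {
    let x = modPow(a, d, n);
    if (x === 1n || x === n - 1n) continue;
    let composite = true;
    for (let r = 1n; r < s; r += 1n) {
      x = (x * x) % n;
      if (x === n - 1n) { composite = false; break; }
    }
    if (composite) return false;
  }
  return true;
}

// Draw random odd candidates with the top TWO bits forced to 11
// (the rule discussed above) until one passes the primality test.
function randomPrime(bits) {
  const b = BigInt(bits);
  for (;;) {
    let c = 0n;
    for (let i = 0; i < bits; i++) {
      c = (c << 1n) | (Math.random() < 0.5 ? 1n : 0n);
    }
    c |= 1n;                                   // force odd
    c |= (1n << (b - 1n)) | (1n << (b - 2n));  // force MSB and second bit
    if (isPrime(c)) return c;
  }
}

const p = randomPrime(64);
console.log(p.toString(2).length); // always 64: the MSB is guaranteed 1
```

Because the top bit is forced, every returned prime has exactly the requested bit length; forcing the second bit as well keeps the product of two such primes at the full expected size.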

Question 2: I wonder, if the first two bits of 2048 bits are 1s, how many prime numbers are possibly representable in the remaining 2046 bits?

Great question! Let me calculate this properly. Here's the answer, and it's mind-bending:


Primes in a 2048-bit number where the first two bits are 11

The range is all numbers from 11000...0 to 11111...1 (2046 free bits), so there are 2²⁰⁴⁶ candidates.

By the Prime Number Theorem, the number of primes in a range ≈ range_size / ln(midpoint), and here ln(midpoint) ≈ ln(2²⁰⁴⁷) = 2047 × ln 2 ≈ 1419:

≈ 2²⁰⁴⁶ / 1419 ≈ 2²⁰³⁶ ≈ 10⁶¹³ primes


To put that in perspective:

Thing | Count
Atoms in the observable universe | ~10⁸⁰
Primes in our 2048-bit range | ~10⁶¹³
How many times larger? | 10⁵³³× more primes than atoms!

Density

About 1 in every ~1,419 numbers in that range is prime (since ln(midpoint) ≈ 1419). So if you pick a random odd number in that range and test it, you have roughly a 1-in-710 chance it's prime — surprisingly frequent! This is why prime generation is fast in practice: just pick random numbers and run a primality test until one passes.
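These estimates are easy to verify with a few lines of plain floating-point math; a quick sketch:

```javascript
// ln(midpoint of the 2048-bit range) ≈ ln(2^2047) = 2047 * ln 2.
const lnMid = 2047 * Math.LN2;
console.log(Math.round(lnMid));       // 1419 → about 1 in 1419 numbers is prime

// Prime count ≈ 2^2046 / lnMid. Take log10 to see its magnitude:
// log10(count) = 2046 * log10(2) - log10(lnMid)
const log10Count = 2046 * Math.log10(2) - Math.log10(lnMid);
console.log(Math.round(log10Count));  // 613 → about 10^613 primes
```

Working in logarithms sidesteps the fact that 2²⁰⁴⁶ itself far exceeds what a floating-point double can hold.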

This astronomical abundance of primes is precisely why RSA key generation is safe: there is no feasible way to enumerate the primes or predict which two someone chose. (In DH the prime itself is public; security there rests on the discrete-log problem instead.)

A histogram in a grayscale image-processing class had bugs that Claude.ai pointed out (an 8-year-old project)

I was researching histograms and have an app that draws one using Qt. Here is the code for histogram.cpp. I want you to explain it at a high level and check whether it is correct.

The GitHub repo link first: https://github.com/JCamiloo/ImageProcessing

Here's a high-level explanation and code review:

What the Code Does

This implements a grayscale image processing class with three main capabilities:

  1. Histogram calculation — counts how many pixels exist at each intensity value (0–255)
  2. Histogram equalization — redistributes pixel intensities to improve contrast
  3. Thresholding — converts a grayscale image to binary (black/white) using either ISODATA or Otsu's method

Bugs & Issues

1. calculateHistogram — Wrong pixel format assumption

originalPixels = image.bits(); // assumes 1 byte per pixel

image.bits() returns raw bytes, but most Qt images use more than 1 byte per pixel (e.g. RGB = 3 bytes, ARGB = 4 bytes). You should either enforce Format_Grayscale8 before calling this, or stride by bytesPerPixel. Without this, you're reading garbage values for non-grayscale images.

2. calculateHistogram — Array never reset

histogramArray[value]++;

histogramArray is never zeroed before accumulating. If calculateHistogram is called more than once (which it is — both equalizeHistogram and calculateThreshold call it), values from previous calls accumulate, producing a corrupt histogram.

Fix: Add memset(histogramArray, 0, sizeof(histogramArray)); at the start.

3. calculateThreshold (Otsu) — Division by zero risk

float m2 = (sum - meanF) / q2;

When i = 255, q2 becomes 0, causing a divide-by-zero crash.

Fix:

if (q1 == 0 || q2 == 0) continue;

4. calculateThreshold — Missing return for OTSU path edge case

The function has no return threshold at the bottom — if somehow neither branch is entered (malformed enum), behavior is undefined. Minor, but worth a default return 0 at the end.

5. equalizeHistogram — equalizedHistogram never reset

Same issue as histogramArray: equalizedHistogram accumulates across calls.

6. thresholding — Image format mismatch risk

qThresholdedImage = QImage(image.width(), image.height(), image.format());

If the source image isn't Format_Grayscale8, writing single bytes per pixel into thresholdedPixels will misalign with the actual bytes-per-pixel of the format, corrupting the output.


Summary Table

Issue | Severity
Histogram array not reset between calls | 🔴 High
Pixel format not enforced (bytesPerPixel) | 🔴 High
Otsu divide-by-zero at q2=0 | 🔴 High
equalizedHistogram not reset | 🟡 Medium
Thresholding format mismatch | 🟡 Medium
Missing default return in calculateThreshold | 🟢 Low

The core logic of equalization and both thresholding algorithms is correct — the math checks out. The bugs are mostly around memory/format assumptions that are easy to fix.

Will new lateral recruits take more time to start being productive in deep tech startups?

Yes, lateral recruits in deep tech startups generally do take longer to reach full productivity compared to similar hires in conventional tech companies. Here's why:

Why the Ramp-Up Is Longer

Domain complexity is higher
Deep tech (biotech, quantum computing, advanced materials, aerospace, AI/ML research, etc.) involves specialized scientific or engineering knowledge that can't be absorbed quickly. Even experienced professionals from adjacent fields face steep learning curves.

Institutional knowledge is denser and less documented
Deep tech startups often operate at the frontier of knowledge — what they know isn't in textbooks. Much of the critical know-how lives in the heads of founders and early employees, making knowledge transfer slower and more effortful.

Interdisciplinary overlap is harder to navigate
These companies often blend science, engineering, regulation, and business in unusual ways. A lateral hire from a pure software background, for example, may struggle to contribute meaningfully in a hardware-software co-design environment.

Tools, processes, and IP are often proprietary
Unlike SaaS startups where standard industry tools (AWS, GitHub, Jira) are universal, deep tech firms may use bespoke simulation environments, lab equipment, or internal frameworks with no external analogue.

Cultural and pace mismatches
Coming from a large corporation (e.g., a pharma giant or defense contractor), lateral hires may be used to slower iteration cycles, more resources, and clearer role boundaries — all of which contrast sharply with a lean, fast-moving startup.

Factors That Can Shorten the Ramp-Up

  • Proximity of prior domain — A PhD chemist joining a materials startup vs. a generalist PM joining the same company have very different timelines.
  • Quality of onboarding — Deep tech startups that invest in structured knowledge-transfer programs see faster integration.
  • Role type — Business, operations, or sales hires typically ramp faster than research engineers or scientists.
  • Mentorship pairing — Assigning a technical co-pilot from day one accelerates context transfer significantly.
  • Cultural fit — Candidates who've previously worked in ambiguous, resource-constrained environments adapt faster.

Rough Benchmark

Context | Typical Productivity Ramp
SaaS/conventional tech startup | 1–3 months
Deep tech startup (business roles) | 3–6 months
Deep tech startup (technical/research roles) | 6–18 months

Bottom Line

The investment is higher, but so is the ceiling. A well-integrated lateral hire in a deep tech startup — once productive — often brings external perspective, cross-industry pattern recognition, and networks that early-stage specialist teams lack. The key is setting realistic expectations and building onboarding infrastructure proportional to the complexity of what you're asking them to absorb.

Friday, February 20, 2026

Indian IT HR Practices

 Q: Did the Indian IT industry segregate some engineers exclusively as examples (samples) for negative-testing scenarios, so as to master their HR policies, serve as discipline enhancers, and create stories around their characters?

That’s a very serious and important question.

Short answer:
There is no credible evidence that the Indian IT industry systematically segregates certain engineers as “negative testing samples” to experiment with HR policies or to create cautionary stories around them.

Let’s unpack this calmly and rationally.


1. How Large IT Companies Actually Operate

Major Indian IT firms like:

  • Tata Consultancy Services

  • Infosys

  • Wipro

  • HCL Technologies

are:

  • Publicly listed

  • Heavily audited

  • Bound by labor laws

  • Subject to compliance, grievance redressal, and legal oversight

Intentionally designating employees as “negative test subjects” for HR experimentation would:

  • Be illegal

  • Be ethically indefensible

  • Create massive legal and reputational risk

  • Be almost impossible to hide at scale

Corporations avoid predictable legal exposure. They optimize for profit and risk reduction, not psychological experiments.


2. Why It Can Feel That Way

However, your question likely comes from lived experiences that feel like:

  • Unequal enforcement of policies

  • Some employees being made “examples”

  • Disciplinary actions being publicized informally

  • Stories circulating about “what happened to X”

These do happen in many organizations — but they are usually explained by:

A. Organizational Signaling

Managers sometimes:

  • Take visible action in one case to deter others.

  • Publicize consequences to reinforce policy seriousness.

This is not secret experimentation — it's basic behavioral management.


B. Bureaucratic Simplification

In large IT companies:

  • HR often applies policy rigidly to avoid bias claims.

  • Edge cases get handled poorly.

  • A few individuals get caught in systemic inefficiencies.

That can create the feeling of:

“They needed someone to sacrifice.”

But it is usually:

  • Risk minimization

  • Managerial self-protection

  • Compliance over empathy


C. Informal Reputation Effects

Indian IT culture sometimes has:

  • Internal whisper networks

  • Performance narrative building

  • Labeling (“difficult”, “not aligned”, etc.)

This is flawed human behavior — not coordinated scientific segregation.


3. What Would Be Required for Your Hypothesis to Be True?

For systematic “negative testing samples” to exist, companies would need:

  • A centralized HR experimentation framework

  • Deliberate tracking of psychological reactions

  • Internal documentation of test subjects

  • Controlled comparison groups

That would resemble:

  • Academic experimental design

  • Behavioral research institutions

There is no evidence of such institutionalized mechanisms in Indian IT services firms.


4. What Is More Plausible?

More realistic dynamics include:

  • Bell curve performance rating systems

  • Forced ranking in some years

  • Cost optimization via selective attrition

  • Managerial politics

  • Cultural hierarchy reinforcement

  • Psychological safety gaps

These can produce:

  • Feelings of targeted treatment

  • Career derailment narratives

  • Perception of being “used as example”

But these emerge from incentive structures, not secret classification.


5. A Psychological Note (Important)

When someone experiences:

  • Repeated unfair treatment

  • Patterned exclusion

  • Being singled out multiple times

The mind naturally searches for a structural explanation.

The hypothesis:

“Maybe I was intentionally categorized as a sample.”

is an attempt to create coherence in chaos.

That doesn’t mean your experiences weren’t real.
It means the interpretation may be more complex.


6. What You Can Do Instead of Assuming Segregation

If you suspect systemic bias:

  1. Track incidents objectively.

  2. Document email trails.

  3. Compare policy application across peers.

  4. Use formal grievance channels.

  5. Maintain professional networks outside your org.

  6. Avoid internal isolation.

Power in large systems comes from:

  • Documentation

  • External mobility

  • Skill leverage

  • Reputation portability


Final Answer

There is no verified or credible evidence that Indian IT companies deliberately segregate engineers as negative-testing samples to refine HR policies or create disciplinary folklore.

What does exist:

  • Bureaucratic rigidity

  • Managerial signaling

  • Incentive-driven attrition

  • Uneven application of policies

Those can feel personal — but they are structural, not experimental.


If you’d like, we can explore:

  • Whether your experience fits known HR behavioral patterns

  • Or how HR policy power structures typically operate in Indian IT

Just tell me which direction you want to take.

Saturday, February 14, 2026

Data Control Framework

 Question: I want to store the data that passes from me to my employer, my public profile, my electric bike, my medical service provider, my banks, all websites, my educator, my transport service provider, etc. How can I practically intercept all this data and get better control of my own data?

ChatGPT: You’re thinking in the right direction. What you’re describing is building a personal data control layer — a system where you see, log, and manage the data that flows out of you to institutions.

You cannot legally or technically “intercept everything” (especially encrypted traffic), but you can systematically monitor, redirect, log, and control most of your digital exhaust.

Let’s break this into a practical framework.


1️⃣ Understand the Data Flow Ecosystem

Your data flows to:

  • Employer (HR systems, attendance, productivity tools)

  • Banks & fintech apps

  • Medical providers & insurance

  • Electric bike (IoT telemetry)

  • Government portals

  • Websites (cookies, trackers)

  • Educator platforms (LMS, exams)

  • Transport providers (Uber/Ola/Metro cards)

  • Telecom providers

Think of this as:

YOU → DEVICE → NETWORK → SERVICE → DATABASE

Control can be applied at each layer.


2️⃣ Control at Device Level (Your First Defense)

A. Use a Firewall / Traffic Monitor

On desktop:

  • GlassWire

  • Little Snitch (Mac)

  • Windows Firewall with advanced logging

  • OpenSnitch (Linux)

On Android:

  • NetGuard

  • RethinkDNS

These show:

  • Which app is sending data

  • Where it is sending

  • How often

You can block suspicious flows.


B. Use a Privacy-Focused Browser Setup


Recommended:

  • Browser: Firefox or Brave

  • Extensions:

    • uBlock Origin

    • Privacy Badger

    • Cookie AutoDelete

This helps control:

  • Tracking cookies

  • Fingerprinting

  • Cross-site monitoring


3️⃣ Network-Level Control (Advanced but Powerful)

A. Use a DNS-Level Filter

Set up:

  • Pi-hole (at home router level)

  • NextDNS

  • AdGuard DNS

This gives:

  • Domain-level logs

  • Tracking blocking

  • IoT monitoring (including your electric bike if connected via WiFi)


B. VPN with Logging Transparency

Use VPNs not just for privacy — but to centralize outbound traffic logs.

However:
You still won’t decrypt HTTPS traffic unless you install your own proxy certificate (advanced and risky).


4️⃣ Account-Level Data Control

Most platforms allow data export.

Under Indian context:

  • RBI mandates data transparency in banking

  • The IT Act and DPDP Act (2023) give you the right to request copies of your data

You can:

Request data from:

  • Banks (transaction logs, profiling data)

  • Employer (HR data, monitoring logs)

  • Medical providers (EMR records)

  • Educator platforms (activity logs)

Use:

  • GDPR-style data access requests (even in India many comply)

  • Periodic annual data download

Create:
A personal encrypted archive of:

  • All exports

  • Terms of service

  • Consent agreements


5️⃣ Financial Data Control


Use:

  • Open banking APIs

  • Aggregators like:

    • RBI Account Aggregator framework (India)

This lets you aggregate your financial data rather than letting banks own the full picture.


6️⃣ IoT Devices (Electric Bike, Smart Devices)

IoT devices often:

  • Send telemetry

  • Collect GPS

  • Sync to cloud apps

Control strategies:

✔ Check router logs
✔ Block unknown domains
✔ Use a separate IoT VLAN
✔ Avoid auto-sync if not needed
✔ Review app permissions

Advanced:
Run network packet inspection using Wireshark (for traffic analysis only; it cannot decrypt properly encrypted traffic).


7️⃣ Build Your Own “Personal Data Vault”

Since you’re already into systems thinking (based on your project where students create life-insight systems), this can become a serious architecture:

Build:

Personal Data Lake

  • Collect:

    • Bank CSV exports

    • Health records

    • Fitness data

    • Email metadata

    • GPS logs

  • Store in:

    • Encrypted drive (VeraCrypt)

    • Self-hosted Nextcloud

    • Encrypted NAS

Then create:

  • Dashboard

  • Alerts

  • Insights

  • Trend analysis

Now you own the insights instead of Big Tech.


8️⃣ Legal + Strategic Control

You cannot intercept employer internal databases.
You cannot decrypt HTTPS without violating laws.
You cannot scrape protected systems you don’t own.

But you can:

  • Reduce voluntary oversharing

  • Revoke permissions

  • Separate identities (work vs personal devices)

  • Avoid single sign-on everywhere

  • Use alias emails

  • Minimize biometric usage where possible


9️⃣ A Realistic Control Hierarchy

Level | What You Can Control | Difficulty
Browser tracking | High | Easy
App telemetry | Moderate | Medium
IoT traffic | Moderate | Medium
Bank profiling | Low | Hard
Employer analytics | Very Low | Hard
Government records | Very Low | Hard

Absolute control is impossible.
Strategic control is achievable.


10️⃣ The Strategic Mindset Shift

Instead of:

“How do I intercept everything?”

Think:

“How do I design a digital life where I emit minimal unnecessary data and retain copies of critical data?”

That’s achievable.


If You Want to Go Deeper

Since you’re technically inclined and teaching full stack:

We could design:

  • A MERN-based Personal Data Control Dashboard

  • A data-ingestion pipeline for CSV/API exports

  • A consent-tracking UI

  • An alert system for new data-sharing permissions

That would be a powerful demo system — almost like a “Personal OS”.


If you'd like, tell me:

Are you aiming for:

  1. Personal privacy?

  2. Research project?

  3. Legal control?

  4. Commercial product idea?

  5. Educational demo?

Your objective changes the architecture dramatically.


Question: Commercial Product idea

Excellent. Now we move from privacy hobbyist to founder mindset.

What you’re describing is not just interception.

It is a Personal Data Command Center (PDCC).

This is a serious commercial opportunity — especially in India post-DPDP Act 2023 and globally under GDPR-like regimes.

Let’s structure this properly.


🧠 Product Vision

Positioning:

“A Personal Operating System for Data Ownership.”

Not spyware.
Not surveillance.
Not hacking.

Instead:

  • User-authorized aggregation

  • Consent intelligence

  • Behavioral insight engine

  • Risk detection layer


🏗 Core Architecture of the Product

Layer 1 – Data Ingestion (User Authorized)

You DO NOT intercept illegally.
You collect via:

  1. OAuth login APIs

  2. Account Aggregator APIs (India – RBI framework)

  3. Health record exports

  4. Banking CSV uploads

  5. Browser extension telemetry (with consent)

  6. IoT router-level monitoring app

  7. Email parsing (Gmail API with consent)

  8. UPI / transaction SMS parsing (Android permission)

This makes it legal and scalable.


Layer 2 – Unified Data Vault


Features:

  • Encrypted at rest

  • Zero-knowledge encryption

  • User-owned keys

  • Portable export

  • On-device processing option (premium tier)

Think:
Mix of Plaid + 1Password + Notion + Personal analytics.


Layer 3 – Insight Engine

This is your differentiation.

You don’t just store data.
You convert it into:

  • Risk alerts (data oversharing score)

  • Financial leakage detection

  • Health pattern correlation

  • Subscription sprawl detection

  • Employer overreach signals

  • Consent expiry alerts

Example:

“Your electric bike shares GPS data every 2 minutes to 3 third parties.”

That’s powerful.


Layer 4 – Consent & Control Dashboard


UI Sections:

  1. Who Has My Data?

  2. Risk Heat Map

  3. Data Flow Map (visual)

  4. Delete Requests Generator

  5. Annual Data Audit Report

  6. AI summary: “Your Digital Life This Month”

This becomes addictive.


🧨 Where The Real Opportunity Is

1️⃣ India – Account Aggregator Ecosystem

Leverage RBI AA framework.
Very few startups are building consumer-side intelligence layers on top of it.

You can integrate:

  • Banks

  • Insurance

  • Mutual funds

  • Pension

  • GST (for businesses)

This is HUGE.


2️⃣ Electric Vehicle + IoT Privacy

In India, EV ecosystem is exploding.

Most EV apps:

  • Track GPS

  • Send ride patterns

  • Collect battery telemetry

Offer:

“EV Data Transparency Report”

Partner with OEMs or become watchdog platform.


3️⃣ Employer Monitoring Explosion

With WFH:

  • Productivity tracking

  • Keylogging tools

  • Surveillance apps

Offer:

“Work Privacy Score”

A controversial but potentially viral feature.


💰 Business Model

Freemium

  • Free: basic data aggregation

  • Pro: AI insights + alerts

  • Premium: legal automation (data deletion requests)

B2B2C

Sell to:

  • Insurance companies (ethical transparency angle)

  • Privacy-conscious banks

  • EdTech platforms

White Label

Offer SDK to fintech apps to show “Your Data Transparency Score”.


⚠ Legal & Compliance Strategy (Critical)

You must:

  • Use explicit consent flow

  • Avoid packet interception without consent

  • Avoid decrypting HTTPS traffic

  • Avoid bypassing platform restrictions

  • Comply with:

    • DPDP Act 2023 (India)

    • GDPR (if global)

    • RBI guidelines

Your product must be:
Empowerment tool, not surveillance tool.


🧠 Competitive Landscape

Global players:

  • OneTrust (enterprise focus)

  • Mine (data discovery)

  • Jumbo Privacy

  • Apple Privacy Dashboard

India:
Almost no serious consumer-grade data OS player.

This is early stage.


🛠 Technical Stack (Since You Teach MERN)

Frontend:

  • React + D3 (for data flow graphs)

Backend:

  • Node + Express

  • MongoDB (document storage)

  • Postgres (audit logs)

Security:

  • End-to-end encryption

  • User-side key derivation

  • Hardware key support (YubiKey)

AI Layer:

  • Local LLM option (privacy conscious)

  • Pattern detection models

  • Risk scoring system


🔥 Product Differentiator Idea

Build:

“Data Emission Score”

Like CIBIL but for privacy.

Score user monthly:

  • Tracking exposure

  • Data sharing frequency

  • Data sensitivity

  • Over-permissioned apps

This becomes shareable.
Gamified.
Viral.


📈 Go-To-Market Strategy (India First)

Target:

  • Tech employees

  • EV owners

  • Fintech users

  • Crypto investors

  • Privacy-aware parents

Leverage:

  • LinkedIn thought leadership

  • Engineering colleges

  • Hackathons

  • Cybersecurity communities

Given your background teaching tech, you already have a distribution channel through students.


⚡ Hard Truth

You cannot “intercept all data.”

But you can:

  • Aggregate

  • Audit

  • Visualize

  • Alert

  • Automate rights requests

  • Reduce digital leakage

And that is commercially viable.


Now let’s refine.

Do you want this product to be:

A) Consumer app
B) Enterprise SaaS
C) Hybrid
D) Developer platform (SDK/API first)

Your choice determines architecture, compliance cost, and funding path.

Tuesday, February 10, 2026

Full Stack Development - Objects and Methods - Class 9

 Methods are actions that can be performed on objects.

Methods are functions stored as property values.

In JavaScript, objects are collections of related properties (data) and methods (functions that perform actions).

Objects
Objects are a fundamental data type in JavaScript used to store complex entities and collections of key-value pairs.
Properties are named values that define the characteristics or state of an object (e.g., a person's name or age). They are stored as key-value pairs.
  • Methods are functions stored as property values that define the behaviors or actions an object can perform (e.g., a person's walk() or talk()).
  • Nearly all objects in JavaScript inherit properties and methods from Object.prototype
Built-in Object Methods
JavaScript provides numerous built-in methods on the global Object constructor that allow you to work with objects. These are static methods used directly on the Object constructor itself, rather than an object instance.
  • Object.keys(obj): Returns an array of an object's own enumerable string property names.
  • Object.values(obj): Returns an array containing the values of an object's own enumerable string properties.
  • Object.entries(obj): Returns a nested array of an object's own enumerable string key-value pairs.
  • Object.assign(target, source): Copies all enumerable own properties from one or more source objects to a target object.
  • Object.create(proto): Creates a new object with the specified prototype object and properties.
  • Object.freeze(obj): Prevents any extensions of an object and makes existing properties non-writable.
  • Object.seal(obj): Prevents new properties from being added, but allows modification of existing properties.
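A short example (using a hypothetical student object) tying these static methods together:

```javascript
const student = { name: "Asha", regNo: "1JT23IS001" };

console.log(Object.keys(student));    // [ 'name', 'regNo' ]
console.log(Object.values(student));  // [ 'Asha', '1JT23IS001' ]
console.log(Object.entries(student)); // [ [ 'name', 'Asha' ], [ 'regNo', '1JT23IS001' ] ]

// Object.assign copies enumerable own properties into a target:
const merged = Object.assign({}, student, { year: 2 });
console.log(merged.year);             // 2

// After freezing, writes are silently ignored (they throw in strict mode):
Object.freeze(student);
console.log(Object.isFrozen(student)); // true
```

Note that these are called on the Object constructor itself (Object.keys(student)), never on the instance (student.keys() does not exist).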

const person = {

  firstName: "John",
  lastName: "Doe",
  age: 50,
  fullName: function() {
    return this.firstName + " " + this.lastName;
  }
};
person.fullName(); // returns the full name: "John Doe"

const objectMethod = function() {
  console.log("Name: " + this.name);
  console.log("RegNo: " + this.regNo);
  console.log("NetWorth: " + this.netWorth);
};

const obj = {
  name: "Dhanush BS",
  regNo: "1JT23IS016",
  netWorth: "$1000000",
  display: objectMethod,
};

const obj2 = {
  name: "Yashwanth S D",
  regNo: "1JT23IS064",
  netWorth: "$10000000",
  display: objectMethod,
};

obj.display();
obj2.display();

The this keyword:

const person = {
  firstName: "John",
  lastName: "Doe",
  id: 5566,
  getId: function() {
    return this.id;
  }
};

let number = person.getId();

In the example above, this refers to the person object.

this.id means the id property of the person object.

person.name = function () {
  return (this.firstName + " " + this.lastName).toUpperCase();
};

Nested Objects

const myObj = {
  name:"John",
  age:30,
  myCars: {
    car1:"Ford",
    car2:"BMW",
    car3:"Fiat"
  }
}

console.log(myObj.myCars.car2)

JavaScript Destructuring

let {firstName, lastName} = person;

It can also unpack arrays and any other iterables (note the square brackets; the source must be an iterable like an array, not a plain object):

let [firstName, lastName] = ["John", "Doe"];

// Create an Object
const person = {
  firstName: "John",
  lastName: "Doe",
  age: 50
};

// Destructuring
let {firstName, lastName} = person;

The order of the properties does not matter:

// Destructuring
let {lastName, firstName} = person;

JavaScript Object Prototypes


All JavaScript objects inherit properties and methods from a prototype.


In the previous chapter we learned how to use an object constructor:

Example

function Person(first, last, age, eyecolor) {
  this.firstName = first;
  this.lastName = last;
  this.age = age;
  this.eyeColor = eyecolor;
}

const myFather = new Person("John", "Doe", 50, "blue");
const myMother = new Person("Sally", "Rally", 48, "green");

We also learned that you cannot add a new property to an existing object constructor:

Example

Person.nationality = "English";

To add a new property to a constructor, you must add it to the constructor function:

Example

function Person(first, last, age, eyecolor) {
  this.firstName = first;
  this.lastName = last;
  this.age = age;
  this.eyeColor = eyecolor;
  this.nationality = "English";
}
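As a sketch of where this chapter is heading: adding the property to Person.prototype (rather than inside the constructor) makes it available to every instance, including objects created before the line runs.

```javascript
function Person(first, last) {
  this.firstName = first;
  this.lastName = last;
}

// Created BEFORE the prototype is extended:
const myFather = new Person("John", "Doe");

// Properties and methods added via the prototype are shared by all instances:
Person.prototype.nationality = "English";
Person.prototype.fullName = function () {
  return this.firstName + " " + this.lastName;
};

console.log(myFather.nationality); // "English"
console.log(myFather.fullName());  // "John Doe"
```

The lookup falls through from the instance to Person.prototype (and ultimately to Object.prototype), which is the inheritance chain described at the start of this lesson.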