
Journal

Curated dispatches from the intersection of architecture, AI, and computation.

Industry news, tool deep-dives, and studio notes from Fraktal.

16 articles
From the Studio

Where Architecture Meets AI

Fraktal explores the intersection of computational design, artificial intelligence, and architectural practice. Here we share our research notes, tool reviews, industry news, and perspectives on how emerging technology reshapes the built environment.


Upcoming Events

Mar 16
NVIDIA GTC 2026 (San Jose, CA)
AI & GPU

GPU technology, Omniverse & digital twins for AEC

Fraktal will be there
Mar 18
AI in AEC 2026 (Helsinki, Finland)
AI & AEC

AI-driven project delivery, robotics & data-centric design

Apr 22
MIPIM PropTech 2026 (Cannes, France)
PropTech

PropTech innovation, real estate & smart buildings

May 15
Design Week Türkiye (Istanbul, Turkey)
Design

Design innovation, digital culture & creative industries

Fraktal will be there
May 19
Google I/O 2026 (Mountain View, CA)
AI & Dev

AI models, Gemini APIs & design tooling updates

May 20
Future of Construction (Zurich, Switzerland)
Comp. Design

Digital fabrication, robotics & augmented design at ETH

Fraktal will be there
May 24
Venice Architecture Biennale (Venice, Italy)
Architecture

Global architecture exhibition & critical discourse

Fraktal will be there
Jun 10
AIA Conference on Architecture (San Diego, CA)
Architecture

Architecture & design innovation, sustainability & technology

Sep 07
eCAADe 2026 (Lübeck, Germany)
Comp. Design

Computer aided architectural design in Europe

Fraktal will be there
Sep 08
Istanbul Design Biennial (Istanbul, Turkey)
Design

Contemporary design, spatial research & installations

Fraktal will be there
Sep 15
Autodesk University 2026 (Las Vegas, NV)
Comp. Design

BIM, generative design & Revit/Rhino integrations

Fraktal will be there
Oct 15
World Architecture Festival (Singapore)
Architecture

Global architecture awards, keynotes & masterclasses

Oct 22
ACADIA 2026 (Detroit, MI)
Comp. Design

Computational design, digital fabrication & making

Fraktal will be there
AI & Tech 4 min
Apple Vision Pro in Architecture: One Year Later

It has been one year since Apple Vision Pro launched, and the architecture community has had enough time to separate hype from utility. We have been using it in our studio since month two, and the verdict is nuanced.

What works brilliantly: client walkthroughs. Placing clients inside an unbuilt space at 1:1 scale remains magical. We have seen a measurable drop in change orders, roughly 40% fewer on projects where we used Vision Pro for design review versus traditional screen-based presentations. Clients simply understand space better when they stand inside it.

What also works: collaborative markup. Multiple team members in different cities can stand in the same virtual room, point at walls, and discuss proportions. We use this weekly between our Ankara and Istanbul teams.

What does not work yet: design iteration. The Vision Pro is a consumption device, not a creation device. You cannot meaningfully sketch, model, or parametrically adjust anything while wearing it. The workflow is still Rhino/Grasshopper on a screen, then export to Vision Pro for review, then back to the screen for changes. That round-trip friction is real.

The hardware itself is heavy for long sessions, call it 45 minutes maximum before fatigue. And at $3,500, outfitting an entire team is a significant investment. Meta Quest 3 at $500 covers 80% of the use cases at 15% of the cost.

Our recommendation for studios considering spatial computing: start with Meta Quest 3 for internal design review. Reserve Apple Vision Pro for high-stakes client presentations where the visual fidelity justifies the cost. And wait for the next hardware generation before going all-in.

Comp. Design 4 min
Rhino 9 Announced: SubD, AI Assist, and Real-Time Collaboration

McNeel dropped the first public preview of Rhino 9 at the AEC Technology Symposium, and it is the biggest release since Rhino 6 made Grasshopper a built-in feature. Three headlines stand out.

First, SubD gets a complete overhaul. The new SubD engine supports adaptive subdivision with crease control that actually works for architectural detailing. We have been asking for this since SubD shipped in Rhino 7, and it looks like McNeel listened. Early demos showed facade panels with complex curvature transitioning smoothly to planar edges, all within a single SubD object.

Second, Rhino AI Assist. This is McNeel's answer to the explosion of AI tools in the design space. It is a built-in natural language interface that can generate Grasshopper definitions, explain existing scripts, and suggest geometric operations. It is not a wrapper around ChatGPT; McNeel trained their own model on Grasshopper component documentation and community scripts. Early demos were impressive but limited to relatively simple definitions. For complex multi-objective workflows, you will still need to know what you are doing.

Third, real-time collaboration. Multiple users can work on the same Rhino file simultaneously, with live cursor tracking and conflict resolution. This is Google Docs for 3D modeling. For distributed teams like ours, this could eliminate the painful file-sync rituals we do every morning.

The release is expected Q4 2026. We will be running the WIP builds as soon as they are available and will share detailed benchmarks.

Comp. Design 4 min
The 10 Grasshopper Plugins That Changed Our Workflow in 2026

The Grasshopper plugin ecosystem has quietly evolved from a handful of essential tools into a massive landscape of specialized components. After testing dozens of new releases this year, here are the ten that permanently changed how we work.

Wallacei X remains our go-to for multi-objective optimization. The latest update added real-time Pareto front visualization and integration with external Python objectives. We use it on every project that involves performance-driven form-finding.

Telepathy is the newcomer we are most excited about. It creates live data links between Grasshopper instances running on different machines, enabling distributed parametric workflows. We run heavy environmental simulations on a dedicated workstation while the design team iterates on form on their laptops.

Human UI 2.0 turned our Grasshopper definitions into proper desktop applications. We build client-facing dashboards that let stakeholders adjust design parameters without ever seeing the spaghetti.

Karamba3D 2.2 added nonlinear structural analysis. This matters because real buildings do not behave linearly under extreme loads. We can now run buckling analysis and plastic hinge formation studies directly in Grasshopper.

Heteroptera, Metahopper, Elefront, Lunchbox, Pufferfish, and Anemone round out our daily toolkit. Each solves specific pain points in data management, geometry processing, and iterative workflows.

The ecosystem is what makes Grasshopper irreplaceable. Despite competition from Dynamo and emerging visual programming environments, no platform comes close to this depth of community-built tools.

Architecture 5 min
Zaha Hadid CODE: Inside the Firm's Computational Department

Zaha Hadid Architects is often discussed for its formal language, but the real story is computational. ZH CODE, the firm's research and development group, employs over 30 full-time computational designers who build the tools that make those famous geometries buildable.

What makes ZH CODE unique is not just scale but integration. Unlike most firms where the computational team sits adjacent to design, ZH CODE members are embedded in project teams from day one. The parametric model is not a post-rationalization tool; it is the primary design medium.

Their tech stack is instructive: Rhino + Grasshopper as the core platform, with extensive custom C# and Python scripting. For structural optimization, they use a combination of Karamba3D and proprietary finite element tools. Environmental analysis runs through a custom pipeline built on top of Ladybug Tools. Fabrication rationalization uses their own clustering algorithms for panelization.

The insight for smaller studios: ZH CODE did not start with 30 people. It grew from a handful of designers who wrote scripts to solve specific project problems. The tools accumulated over two decades into a comprehensive platform. The lesson is that computational capability is built project by project, not purchased off the shelf.

What we find most interesting is their recent investment in machine learning. ZH CODE is training neural networks on their own project archive, roughly 1,200 projects spanning 25 years, to identify patterns in their design decision-making. The goal is not to automate design but to create a computational memory that can suggest relevant precedents when designers face similar challenges.

For firms of our scale, the takeaway is clear: every script you write, every algorithm you develop, is an asset that compounds over time. Build your computational library deliberately.

Architecture 4 min
SHoP Architects' Digital Practice: Lessons for Small Studios

SHoP Architects has long been the poster child for digitally integrated practice. They own their fabrication pipeline, build custom software tools, and have spun off technology companies from their R&D work. For a small studio, that sounds impossibly ambitious. But the underlying principles are surprisingly transferable.

Lesson one: own your data pipeline. SHoP's competitive advantage is not any single tool but the seamless flow of data from design through fabrication. At our scale, this means scripting the connections between Rhino, structural analysis, and fabrication output rather than manually exporting and re-importing between software. An afternoon spent writing a Grasshopper-to-Tekla bridge saves weeks of manual coordination across multiple projects.
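A bridge like this can start very small. The sketch below, in plain Python, writes beam centerlines to a CSV that a detailing tool could ingest; the column layout and profile names are our illustrative convention, not Tekla's actual import format:

```python
import csv
import io

def export_beams_csv(beams):
    """Serialize beam centerlines (start/end XYZ plus a profile name)
    to CSV for a downstream detailing tool. The columns here are an
    illustrative convention, not a real Tekla schema."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "x1", "y1", "z1", "x2", "y2", "z2", "profile"])
    for i, (start, end, profile) in enumerate(beams):
        writer.writerow([i, *start, *end, profile])
    return buf.getvalue()

# Two hypothetical beams on a 3 m grid
beams = [((0, 0, 0), (6.0, 0, 0), "HEA200"),
         ((0, 3.0, 0), (6.0, 3.0, 0), "HEA200")]
print(export_beams_csv(beams))
```

The point is not the format but the habit: once geometry leaves Grasshopper as structured data rather than a manual export, the coordination step becomes a script you run, not a task you repeat.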

Lesson two: prototype with real materials early. SHoP is famous for building physical prototypes of complex assemblies before finalizing digital models. You do not need a fabrication lab to do this. We use local CNC services and 3D printing to test joinery details at 1:1 scale. The cost is marginal compared to discovering construction issues on site.

Lesson three: document everything computationally. SHoP does not just design buildings; they design the processes that design buildings. Every project produces reusable algorithms, fabrication templates, and analysis workflows. We have adopted this mindset: project deliverables include not just drawings and models but the Grasshopper definitions and scripts that generated them.

The scale difference between a 500-person firm and a 5-person studio is real, but the digital mindset scales down gracefully. Start with the data pipeline, build your script library, and prototype physically. These three habits compound.

Fraktal Notes 4 min
How We Use Claude to Review Parametric Definitions

Six months ago, we started a quiet experiment: feeding our Grasshopper definitions to Claude for code review. Not for generation, but for critique. The results changed how our studio works.

The workflow is simple. We serialize a Grasshopper definition into a structured text format, listing every component, its inputs, outputs, and connections. We pass this to Claude with a prompt asking for structural review: unused components, potential null reference errors, inefficient data tree operations, and opportunities for simplification.
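A minimal version of that serialization, assuming our own ad hoc schema (component name, inputs, outputs, and wire sources; this is not a McNeel export format), might look like:

```python
def serialize_definition(components):
    """Flatten a Grasshopper definition into review-ready text for an LLM.
    Each component is a dict with name, inputs, outputs, and a mapping of
    input -> upstream source. The schema is our own convention."""
    lines = []
    for comp in components:
        lines.append(f"[{comp['name']}]")
        for inp in comp["inputs"]:
            source = comp["sources"].get(inp, "<unset>")
            lines.append(f"  in  {inp} <- {source}")
        for out in comp["outputs"]:
            lines.append(f"  out {out}")
    return "\n".join(lines)

# A one-component toy definition with one unwired input
definition = [
    {"name": "Cull Pattern",
     "inputs": ["List", "Pattern"],
     "outputs": ["List"],
     "sources": {"List": "Divide Curve.Points"}},
]
print(serialize_definition(definition))
```

Unwired inputs show up as `<unset>`, which is exactly the kind of detail we want the review to flag.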

Claude catches things we miss. In one recent project, it identified that a Dispatch component downstream of a Cull Pattern was redundant because the culling already filtered the data the Dispatch was supposed to separate. Removing it simplified the definition and eliminated a subtle performance issue where the tree structure was being needlessly duplicated.

It is also effective at suggesting alternative approaches. When we described a complex point attractor setup using multiple Closest Point components, Claude suggested replacing the entire sequence with a single Pull Point operation, reducing 12 components to 3 and improving execution time by roughly 60%.
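The data-flow idea behind that replacement can be sketched in plain Python (standing in for the Grasshopper components, not reproducing them): instead of one closest-point lookup per attractor, pull every point to its single nearest attractor in one pass.

```python
import math

def pull_points(points, attractors):
    """For each point, return (nearest attractor, distance) -- the
    data-flow equivalent of collapsing several Closest Point
    components into a single Pull Point operation."""
    result = []
    for p in points:
        nearest = min(attractors, key=lambda a: math.dist(p, a))
        result.append((nearest, math.dist(p, nearest)))
    return result

pulled = pull_points([(0, 0), (9, 9)], [(3, 4), (10, 10)])
```

One operation, one output tree, and no intermediate branches to merge afterwards.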

The limitations are real. Claude does not understand geometry spatially. It cannot tell you if a surface is self-intersecting or if a boolean operation will fail on a specific input. It works at the data-flow level, treating Grasshopper as a visual programming language, which is exactly what it is.

We now run this review on every definition before it goes into production. It takes 10 minutes and has caught enough issues to justify the workflow permanently. We are exploring building this into Falcon AI as an automated review feature.

Fraktal Notes 4 min
Our Ankara Studio Setup: The Tools We Actually Use

Every studio publishes what tools they aspire to use. Here is what we actually open every day. No aspirational mentions of software we tried once and abandoned.

Hardware: Two workstations running Windows 11 with RTX 4080 GPUs, 64GB RAM, NVMe storage. One dedicated render/simulation machine with RTX 4090 that runs overnight jobs. MacBook Pros for on-site work and presentations. Our philosophy: invest in GPU, not CPU. Almost everything we do is GPU-accelerated now.

Core modeling: Rhino 8 exclusively. We stopped using Revit for design work two years ago. For BIM deliverables, we export from Grasshopper to IFC and let the project engineers handle the Revit coordination. This is controversial in the industry, but it tripled our design iteration speed.

Parametric: Grasshopper with a core plugin set. Wallacei for optimization, Karamba3D for structural, Ladybug/Honeybee for environmental, Kangaroo for physics, Human UI for dashboards, and our own Falcon AI for AI assistance. We avoid installing more than 20 plugins to keep definitions portable and stable.

Visualization: Enscape for quick renders, Twinmotion for marketing materials, Unreal Engine 5 for VR walkthroughs and cinematic sequences. We deprecated V-Ray last year because Enscape covers 95% of our rendering needs at a fraction of the setup time.

Communication: Discord for team chat, Notion for project documentation, Speckle for model sharing, GitHub for version control on scripts and definitions.

AI tools: Claude for code review and writing, Midjourney for concept ideation, Stable Diffusion XL for custom-trained architectural visualization models.

The total monthly subscription cost for the studio: approximately 800 EUR. Software has never been cheaper relative to its capability. The bottleneck is expertise, not tools.

Fraktal Notes 5 min
SpaceCraft: How LiDAR Room Scanning Changes Interior Design

SpaceCraft started as a question: what if capturing an existing room took minutes instead of hours? After six months of development with Apple's RoomPlan API and ARKit, we have an answer. It does, and it changes the design workflow more than we expected.

The core technology is deceptively simple. Apple's LiDAR sensor fires millions of infrared dots to build a depth map. RoomPlan interprets this data to identify walls, doors, windows, and furniture. SpaceCraft wraps this in an interface designed specifically for architects and interior designers, adding measurement annotation, material tagging, and direct export to Rhino, SketchUp, and IFC formats.

Accuracy is the first question everyone asks. In our testing across 40+ rooms ranging from 8 sqm studio apartments to 120 sqm open-plan offices, wall placement accuracy averages 2-3 cm deviation from manual measurement. That is well within tolerance for schematic design and renovation planning. For construction documents, you still need a total station, but for design intent and client communication, SpaceCraft data is production-ready.

The workflow impact surprised us. A typical residential renovation starts with a site visit: two people, laser distance meter, graph paper, two to three hours. With SpaceCraft, one person walks through the space in 8-12 minutes and leaves with a 3D model. The time savings compound: no transcription errors, no missing measurements discovered back at the office, no second site visits.

Export flexibility was a design priority. Architects work in different ecosystems: some in Rhino, some in SketchUp, some in Revit via IFC. SpaceCraft exports to all three, preserving room topology and furniture placement. The Grasshopper integration is particularly powerful, as scanned rooms can feed directly into parametric renovation studies.

We are now working on SpaceCraft 2.0, which will add automatic material detection using on-device machine learning. The camera identifies flooring type, wall finish, and ceiling material during the scan, eliminating manual specification. Early prototypes are promising, achieving roughly 85% accuracy on common residential materials.

Fraktal Notes 5 min
Archly.ai: Building an AI Design Assistant That Architects Actually Use

When we started building Archly.ai, we established one rule: it must solve problems architects actually have, not problems AI companies think architects have. That distinction shaped every design decision.

The core insight: architects do not need AI to generate buildings. They need AI to handle the tedious analytical work that consumes 60% of design time but contributes 0% of design value. Zoning compliance checking, unit mix optimization, parking ratio calculations, adjacency analysis: these are the tasks that drain creative energy.

Archly works by ingesting project constraints (site boundaries, zoning regulations, program requirements) and producing quantified design options. It does not draw buildings. It tells you: given your constraints, here are the 12 viable massing configurations ranked by floor area efficiency, daylight access, and construction cost. The architect then designs within the most promising option space.
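A toy version of that ranking step might look like the following; the criteria names and weights are illustrative stand-ins, not Archly's actual metrics:

```python
def rank_options(options, weights):
    """Rank massing options by a weighted sum of normalized criteria
    (each criterion scored 0-1, higher is better)."""
    def score(opt):
        return sum(w * opt[key] for key, w in weights.items())
    return sorted(options, key=score, reverse=True)

# Hypothetical criteria and weights, not Archly's real evaluation model
weights = {"far_efficiency": 0.5, "daylight": 0.3, "cost_score": 0.2}
options = [
    {"name": "A", "far_efficiency": 0.82, "daylight": 0.60, "cost_score": 0.70},
    {"name": "B", "far_efficiency": 0.75, "daylight": 0.80, "cost_score": 0.65},
]
ranked = rank_options(options, weights)
```

Option A wins on floor area efficiency alone, but B's daylight advantage outweighs it under these weights, which is precisely the trade-off the architect should be deciding, not computing by hand.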

The technical architecture combines three AI layers. A constraint solver handles regulatory compliance, ensuring every generated option meets zoning, fire code, and accessibility requirements. A generative model explores the geometric possibility space within those constraints. And an evaluation engine scores each option against user-defined performance criteria.

We trained the evaluation engine on a dataset of 800+ completed projects from Turkish architectural competitions, learning the implicit relationships between plan geometry and jury scores. This does not mean Archly designs like a competition winner. It means Archly can flag when a spatial configuration has characteristics that correlate with poor evaluation: dead-end corridors, insufficient cross-ventilation potential, or unbalanced unit distributions.

Adoption surprised us. We expected academic interest but limited practitioner uptake. Instead, three mid-size Turkish firms are using Archly in production. Their feedback is consistent: Archly saves roughly 2-3 weeks per project in the feasibility study phase by eliminating manual massing iterations. The design quality does not improve because of AI. It improves because architects spend more time on design and less time on arithmetic.

AI & Tech 4 min
AI-Driven Floor Plan Generation: Where We Are in 2026

The AI floor plan generation landscape has changed dramatically since ArchiGAN first demonstrated that neural networks could produce plausible residential layouts. Three years later, the technology is both more capable and more honestly understood.

The current state of the art uses diffusion models, the same architecture behind Stable Diffusion, adapted for architectural plan generation. These models can produce floor plans that satisfy basic spatial constraints: rooms connect through doors, corridors provide circulation, and wet areas cluster near plumbing risers. The visual quality is impressive, but the architectural quality remains questionable.

The fundamental challenge is that a floor plan is not an image. It is a topological graph with geometric embedding, structural constraints, MEP routing requirements, fire egress compliance, and accessibility standards. Current AI models excel at the visual pattern but struggle with the engineering reality. A generated plan might look plausible but place a load-bearing wall where no column exists below, or route a corridor that violates fire egress width requirements.

The most promising approaches combine AI generation with rule-based validation. Systems like the one we built for Archly.ai generate candidate layouts using AI, then filter them through a constraint satisfaction engine that checks structural feasibility, code compliance, and spatial quality metrics. Only layouts that pass all checks reach the architect. This hybrid approach produces fewer options but dramatically higher quality ones.
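The filtering stage of such a hybrid pipeline can be sketched in a few lines; the two checks below are simplified stand-ins for real code-compliance rules, with made-up thresholds:

```python
def min_corridor_width(layout):
    # Illustrative threshold; real egress widths depend on local code
    return layout["corridor_width_m"] >= 1.2

def no_dead_ends(layout):
    return layout["dead_end_corridors"] == 0

def viable(candidates, checks):
    """Keep only the AI-generated layouts that pass every rule-based check."""
    return [c for c in candidates if all(check(c) for check in checks)]

candidates = [
    {"id": 1, "corridor_width_m": 1.5, "dead_end_corridors": 0},
    {"id": 2, "corridor_width_m": 1.0, "dead_end_corridors": 0},
    {"id": 3, "corridor_width_m": 1.4, "dead_end_corridors": 2},
]
survivors = viable(candidates, [min_corridor_width, no_dead_ends])
```

The generator proposes; the validator disposes. Only the survivors ever reach the architect's screen.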

For practicing architects, the practical advice is: use AI floor plan tools for early-stage exploration, not for production drawings. They are excellent at expanding the design possibility space, showing configurations you might not have considered. But every AI-generated plan needs professional review for buildability, code compliance, and spatial quality. The tool is a starting point, never a final answer.

Architecture 4 min
BIG and the Algorithm: How Data Informs Design at Bjarke Ingels Group

Bjarke Ingels Group has built a reputation on formally striking buildings that feel inevitable, as if the design could not have been anything else. What is less discussed is how much of that inevitability is computationally constructed.

BIG's design process starts with what they call "information architecture": assembling every quantifiable constraint into a parametric model before any form is generated. Sun angles, wind patterns, view corridors, zoning setbacks, program requirements, and circulation flows are all encoded as data layers. The design emerges from the intersection of these constraints, not from a sketch on a napkin.

The Mountain Dwellings project in Copenhagen is a canonical example. The cascading residential units are not a formal gesture; they are the geometric result of optimizing for three simultaneous constraints: every unit gets southern sun exposure, every unit has a view of the Øresund strait, and a parking structure fills the northern volume. The "mountain" form is the only shape that satisfies all three.

BIG's computational team, roughly 15 people working across offices in Copenhagen, New York, London, and Barcelona, builds custom tools for each project. They use Grasshopper for geometric exploration, custom Python scripts for environmental analysis, and proprietary optimization algorithms for complex multi-objective problems. The team structure is similar to ZH CODE: embedded in project teams from concept through construction.

The lesson for smaller practices: BIG proves that computation does not have to mean parametric formalism. Their buildings do not look "computational." They look simple, even obvious. The computation is invisible, buried in the analysis that proves the obvious solution is also the optimal one. That is a powerful model: use computation to validate and refine design intuition, not to replace it.

AI & Tech 4 min
OpenAI o3-mini: What Reasoning Models Mean for Architectural Practice

OpenAI released o3-mini in January 2026, and the architecture community should be paying attention: not because it generates floor plans (it does not), but because it reasons through complex multi-constraint problems in ways that map directly to architectural analysis.

We tested o3-mini on three real problems from our studio practice. First, a zoning compliance check: given Istanbul's imar yönetmeliği (zoning regulation) for a commercial zone, can a 7-story mixed-use building with 60% ground coverage satisfy parking, setback, and FAR requirements simultaneously? o3-mini worked through the calculation correctly, identifying the FAR conflict that our junior architect missed during initial feasibility.
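The arithmetic behind a FAR conflict of that kind is easy to reproduce. The figures below are illustrative, not the project's actual site or zoning values:

```python
def far_check(site_area_m2, coverage, storeys, max_far):
    """Compare the gross floor area implied by coverage x storeys
    against the floor-area-ratio cap. All inputs are illustrative,
    not actual Istanbul zoning parameters."""
    gfa = site_area_m2 * coverage * storeys
    allowed = site_area_m2 * max_far
    return gfa, allowed, gfa <= allowed

gfa, allowed, ok = far_check(site_area_m2=2000, coverage=0.60, storeys=7, max_far=3.0)
# 60% coverage over 7 storeys implies FAR 4.2, which exceeds a 3.0 cap
```

This is the kind of cross-constraint deduction the model handled: each rule is trivial alone, but a feasibility study juggles dozens of them at once.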

Second, we tested structural intuition: given a 12-meter clear span with residential loading, what beam depth is reasonable for steel, glulam, and reinforced concrete? o3-mini provided ranges that matched our structural engineer's preliminary sizing within 10%.
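For context, preliminary sizing of this kind usually starts from span-to-depth rules of thumb. The ratios below are common textbook approximations we are assuming for illustration; they are not the figures from our engineer or from the model's output:

```python
def preliminary_depth_mm(span_m, span_to_depth_ratio):
    """Rule-of-thumb beam depth: span divided by a material-typical ratio."""
    return span_m / span_to_depth_ratio * 1000

# Rough simply-supported ratios often used in preliminary design (assumed)
RATIOS = {"steel": 20, "glulam": 17, "reinforced_concrete": 12}
for material, ratio in RATIOS.items():
    print(f"{material}: ~{preliminary_depth_mm(12.0, ratio):.0f} mm")
```

Any final sizing depends on loading, deflection limits, and the actual section, which is why this stays a sanity check, not a design.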

Third, specification writing: generate a technical specification for curtain wall insulated glass units meeting Istanbul's climate zone requirements. The output was surprisingly detailed, correctly referencing U-value targets and appropriate glass coatings.

The pattern is clear: reasoning models excel at tasks that require combining domain knowledge with logical deduction. They struggle with spatial reasoning, geometric manipulation, and aesthetic judgment. The creative work remains human territory.

AI & Tech 5 min
Claude for Code Review: How We Use AI to Debug Grasshopper Scripts

Six months ago, we started using Claude to review our Grasshopper Python and C# script components. The results have been good enough that it is now part of our standard workflow.

The problem we were solving: our studio maintains over 200 Grasshopper definitions across active projects. When a definition breaks, understanding the logic takes significant time. Comments are sparse. Variable naming is inconsistent.

Claude excels at code comprehension. Paste in a Python script component and ask "what does this do, step by step?" and it produces accurate, detailed explanations roughly 90% of the time. For C# script components, accuracy drops to about 75%.

Where Claude genuinely saves us time is in bug detection. We added a workflow step: before committing any modified script component, paste the before and after versions into Claude and ask it to identify potential issues. It catches off-by-one errors, type mismatches, and null reference risks consistently.

Claude fails at visual-spatial reasoning. It cannot evaluate whether a curve offset will self-intersect. And it does not understand Grasshopper's dataflow paradigm. The practical advice: treat Claude as a surprisingly competent junior developer who has read all the documentation but has never actually used Grasshopper.

AI & Tech 4 min
Midjourney V7: Competition-Ready Renders or Expensive Mood Boards?

Midjourney V7 dropped last month, and architectural Twitter erupted with renders that look indistinguishable from professional visualization. We ran a controlled comparison.

Three scenes from recent projects, each rendered in V-Ray, Enscape, and Midjourney V7 (using detailed prompts describing the actual design). Five architects and three clients evaluated the results blind.

Image quality: Midjourney V7 wins for atmosphere and mood. Materials have a photographic quality that V-Ray achieves only with careful HDRI mapping and material tweaking. Clients consistently preferred the Midjourney images for "feeling" and "wow factor."

Accuracy: Midjourney loses decisively. Every generated image contained spatial inaccuracies: wrong proportions, impossible structural elements, inconsistent shadow directions. These are obvious to trained eyes but invisible to clients.

Our conclusion: Midjourney V7 is exceptional for early-stage concept visualization and competition boards. For client presentations and construction documentation, traditional rendering remains essential because accuracy is non-negotiable.

The hybrid workflow we have adopted: use Midjourney for concept exploration during schematic design, Enscape for real-time iteration during development, and V-Ray for final presentation renders.

Comp. Design 5 min
Rhino 9 + Grasshopper 2: Everything We Know So Far

McNeel has been unusually open about Rhino 9 development, and the changes coming to Grasshopper are the most significant since the platform launched.

Grasshopper 2 is a ground-up rewrite. The current data tree system is being replaced with a more intuitive data model. McNeel calls it "data streams," and preliminary documentation suggests it behaves more like typed collections than the current branch-path system.

Multi-threading is the headline feature. Current Grasshopper runs single-threaded. Grasshopper 2 introduces native parallel execution for independent component branches. In WIP testing, embarrassingly parallel operations show 4-8x speed improvements.
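The reason independent branches parallelize so well is that each one is a pure function of its own inputs. A generic Python illustration of the idea (not Grasshopper 2's actual scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_branch(values):
    """Stand-in for one independent branch of a definition: a pure
    function of its own inputs, so sibling branches can run
    concurrently without coordination."""
    return sum(v * v for v in values)

# Three branches with no data dependencies between them
branches = [list(range(n)) for n in (100, 200, 300)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve_branch, branches))
```

Branches that share upstream data still have to wait on their common ancestor, which is why real-world speedups land in the 4-8x range rather than scaling with core count.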

Native AI integration is confirmed but details are sparse. Screenshots show a "Machine Learning" component category with nodes for inference, training data preparation, and model management. McNeel's David Rutten has mentioned integration with ONNX runtime.

Backward compatibility is the elephant in the room. McNeel has stated that existing GH1 definitions will run in compatibility mode, but not all plugins will work immediately. Our recommendation: start cataloging plugin dependencies now.

Timeline: McNeel targets a public WIP release of Rhino 9 in Q3 2026. Based on historical patterns, expect the full release in early 2027.